Tag Archives: Blog

How goal-based advice combined with a realistic risk framework helps wealth managers grow their business and reduce risks

We believe that goal-based advice is superior to any other form of investment advice because clients always have financial goals: every client invests with at least one goal in mind, even if it is only to grow wealth or to minimise losses. This type of investment advice requires a realistic risk framework that enables accurate wealth projections and a superior user experience.

Today’s advisory processes rely mostly on outdated, single-period risk frameworks: take the example of Markowitz, a methodology from the middle of the last century that is still widely used in investment management.

The basic assumption of this methodology is that returns are normally distributed and that expected returns do not change over time. Most wealth managers still use conventional single-period risk frameworks like Markowitz, although it has been shown empirically that returns are not normally distributed: capital markets have produced negative returns of greater magnitude, and with higher probability, than normally distributed returns imply (so-called ‘fat tails’).
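
To make the ‘fat tails’ point concrete, here is a minimal sketch (not from the article; the volatility, loss threshold and degrees of freedom are illustrative assumptions) comparing the probability of a large daily loss under a normal model and under a fat-tailed Student-t model calibrated to the same volatility:

```python
# Minimal illustration: chance of a -5% daily move under a normal model vs.
# a Student-t model, both calibrated to an assumed 1% daily volatility.
import math
from scipy.stats import norm, t

sigma = 0.01   # assumed daily volatility (1%)
loss = -0.05   # a -5% daily return

p_normal = norm.cdf(loss, loc=0.0, scale=sigma)

df = 3                                       # heavy-tailed degrees of freedom
t_scale = sigma / math.sqrt(df / (df - 2))   # match the 1% standard deviation
p_t = t.cdf(loss / t_scale, df)

print(f"P(return < -5%), normal:    {p_normal:.1e}")  # ~3e-07
print(f"P(return < -5%), Student-t: {p_t:.1e}")       # orders of magnitude larger
```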

This mismatch results in a significant liability risk for the wealth manager: all investment portfolios derived from conventional risk frameworks consequently carry a higher downside risk than projected.

Additionally, conventional risk frameworks do not include the client’s financial goals, savings or liabilities, which are all needed for professional financial and wealth planning. Hence, wealth development cannot be soundly forecast against the client’s financial goals. Further, key wealth parameters like inflation cannot be integrated into the forecast, aggravating the liability risk outlined above.

To overcome this, wealth managers need a risk framework that is realistic and generates investment recommendations that best fit the individual situation of the client. A multi-period stochastic optimisation risk framework should therefore be used: it accurately simulates wealth development based on the client’s financial goals and mitigates the liability risk for the wealth manager.
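
As a rough illustration of the simulation side of such a framework (a toy sketch with invented parameters, not 3rd-eyes’ actual methodology), a multi-period Monte Carlo projection can combine savings, fat-tailed returns and an inflation-adjusted goal into a single goal-success probability:

```python
# Toy multi-period Monte Carlo wealth projection; all parameters are
# illustrative assumptions, not a real advisory model.
import numpy as np

rng = np.random.default_rng(42)
n_paths, years = 10_000, 20
mu, sigma, df = 0.05, 0.12, 4          # assumed yearly return model
start_wealth, savings = 100_000.0, 12_000.0
inflation = 0.02
goal_today = 500_000.0                 # goal in today's money, due in year 20

# Student-t returns rescaled to the target volatility (fat tails)
t_scale = sigma / np.sqrt(df / (df - 2))
returns = mu + t_scale * rng.standard_t(df, size=(n_paths, years))

wealth = np.full(n_paths, start_wealth)
for y in range(years):
    wealth = wealth * (1 + returns[:, y]) + savings   # grow, then add savings

goal_nominal = goal_today * (1 + inflation) ** years  # inflate the goal
print(f"P(goal reached): {np.mean(wealth >= goal_nominal):.1%}")
```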

With a multi-period risk framework, a goal-based advisory solution can also be significantly improved: the ideal goal-based advisory solution is responsive (reacting to changes instantly), holistic, individual, and includes the wealth manager’s specific market view. Much of that functionality is enabled by the multi-period risk framework.

Register for our webinar to find out more about how our solution can support your advisors in growing their business with both existing and new clients.

Author:

Marc Mettler, Head of Business Development, 3rd-eyes AG

Email: [email protected]

LinkedIn: https://ch.linkedin.com/company/3rd-eyes

Web: www.3rd-eyes.com

Join Marc Mettler on a webinar entitled ‘How Digital Wealth Planning can Help Your Financial Institution Grow AuM and Revenue’ by 3rd-eyes on 7th March at 3PM London/10AM New York.

Register Here!

What factors determine proper generator sizing?

Power generation is needed in an endless variety of locations and environments, with each installation having different factors affecting generator requirements. Incorrect generator sizing can be disastrous in any power generation project if particular installation factors are not properly taken into account.

Many OEMs and end users are tasked with sourcing the most efficient and cost-effective generator that meets all applicable customer and project certifications and requirements. Without knowing and understanding all the installation details and factors that go into proper generator sizing, this can be a very difficult task.

The operating environment of a generator set is one of the biggest factors affecting the size of machine needed for a given kVA load. Environments can range from sub-zero to 60°C ambient temperatures and can include harsh conditions and contaminated cooling media, which impact enclosure selection. Extremely harsh, contaminated operating environments can require a cooling enclosure that almost doubles the size of the generator needed to produce the same kVA output as a unit in a controlled environment. Operating altitude is also very important to consider: as elevation increases, air density and cooling capacity decrease, and a larger cooling system is therefore needed.
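
As a back-of-the-envelope illustration (the derating factors below are invented placeholders; real projects must use the manufacturer’s published derating tables), environmental derating amounts to grossing up the required frame size:

```python
# Sizing sketch with invented derating factors; always consult the
# manufacturer's derating tables for real projects.
def required_frame_kva(load_kva: float, ambient_c: float, altitude_m: float) -> float:
    """Gross up the nameplate kVA for ambient temperature and altitude."""
    # assumed ~1% loss per degree C above 40 degC
    temp_derate = 1.0 if ambient_c <= 40 else 1.0 - 0.01 * (ambient_c - 40)
    # assumed ~3% loss per 300 m above 1000 m
    alt_derate = 1.0 if altitude_m <= 1000 else 1.0 - 0.03 * ((altitude_m - 1000) / 300)
    return load_kva / (temp_derate * alt_derate)

# A 1000 kVA load at 50 degC and 2500 m needs roughly a 1307 kVA frame here
print(round(required_frame_kva(1000, ambient_c=50, altitude_m=2500)))
```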

In addition to environmental factors, there are many electrical considerations that affect generator sizing as well. For example, for a given kW required, a lower power factor drives the need for a larger generator. Frequency and RPM (determined by the prime mover) will also directly impact generator size: the physical size of the machine required for the same kVA output can double if speed is cut in half, due to the increased number of poles required.
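
Two standard machine relations make these effects concrete; the numbers below are illustrative:

```python
# kVA = kW / power factor, and poles P = 120 f / N for a synchronous machine.
def apparent_power_kva(kw: float, power_factor: float) -> float:
    return kw / power_factor          # lower PF -> larger kVA frame

def pole_count(freq_hz: float, rpm: float) -> int:
    return int(120 * freq_hz / rpm)   # lower speed -> more poles

print(round(apparent_power_kva(800, 0.9), 1))   # 888.9 kVA
print(round(apparent_power_kva(800, 0.7), 1))   # 1142.9 kVA: noticeably larger
print(pole_count(60, 1800))                     # 4 poles
print(pole_count(60, 900))                      # 8 poles: half speed, twice the poles
```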

Additionally, insulation class (temperature-rise allowance) is a major component in generator selection. While a lower temperature-rise allowance can greatly increase the overall lifetime of your generator, a larger unit will be needed to produce the same kVA as a unit with a higher allowance.

Just as important as any of the topics above are non-ideal loading conditions. High harmonic content and unbalanced loading require a larger unit that can handle the maximum level of unbalance the machine may be exposed to. The same applies to motor starting and block loading capabilities: a bigger machine is required to minimize the voltage dip caused by motor starting.
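
One common first-pass approximation of motor-starting voltage dip shows why a bigger machine means a smaller dip (illustrative only; the transient reactance and load values are assumptions, and detailed sizing uses manufacturer transient data):

```python
# First-pass motor-starting voltage dip estimate (assumed values throughout).
def voltage_dip(xd_transient: float, start_kva: float, gen_kva: float) -> float:
    """Approximate per-unit dip at the generator terminals."""
    x = xd_transient * start_kva / gen_kva
    return x / (1 + x)

# Assumed X'd = 0.25 pu and 600 kVA of locked-rotor (starting) load
for gen_kva in (750, 1000, 1500):
    print(gen_kva, f"{voltage_dip(0.25, 600, gen_kva):.1%}")
# Larger frames -> smaller dip: ~16.7%, ~13.0%, ~9.1%
```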

The topics discussed above are a brief introduction to the concepts and factors that play an important role in generator sizing. Our upcoming webinar “Properly Sizing Your Generator” will examine and explain these topics and more in greater detail to ensure that you are selecting the right generator for your next project.

Nidec – Kato Engineering/Leroy-Somer branded low, medium, and high voltage generators are the reliable choice for genset packagers and power producers worldwide.  Our proven engineering expertise and state-of-the-art manufacturing processes ensure that our generators will out-perform and out-last in any environment.  We offer both standard and custom product design solutions to meet any power generation project needs.

Register now for our “Properly Sizing Your Generator” Webinar to learn more detailed information on selecting the optimal generator for your next project.

Learn More at Upcoming Webinar

Understanding all mechanical and electrical considerations that factor into properly sizing a generator is key to ensure trouble-free and efficient generator operation. Register for this webinar here to make sure your future power generation projects are properly planned.

Here are the details of our Webinar:

Date: Wednesday, November 14

Time:  3PM London/10AM New York

Presenter: Jacob Jewison

Job title: Electrical Engineer

Join Nidec – Kato Engineering & Leroy-Somer Live Webinar “Properly Sizing Your Generator” on 14 November at 3PM London/10AM New York

Register Here!

Data sharing and ownership: home truths for utilities

Despite a history of insularity, data sharing is an inevitable reality for utilities. So who is liable for that data? Engerati asks data specialists at OSIsoft for insight. Originally published at Engerati on 17 Apr 2018

Data is at the forefront of many utilities’ minds – collecting data, protecting data, distributing it and monetising it. However, are utilities prepared to share their own operational data?

Between the associated risks to both security and competitive advantages, utilities have historically had plenty of reasons to be hesitant in sharing their operational data. However, it is becoming increasingly apparent that there may be a business case for it – should it be executed correctly. Unlocking these benefits will be crucial for forward-thinking utilities looking to adapt to the data-driven market.

To better understand the issues surrounding data sharing for utilities, we turned to specialists William McEvoy, Industry Principal, Transmission & Distribution / Distributed Energy Resources, and David Thomason, Business Development Executive / Industry Principal – Global Power Generation at OSIsoft.

The need for operational data sharing

David Thomason explains the increasing need for operational data.

He says, “We look at the whole ecosphere around power generation as a community. There’s a whole world of information sharing among them – there’s the power generation companies, the regulation companies, your regional operators… but there’s also others to consider. Your suppliers, vendors, generators – there’s a whole supply chain of demand in terms of data.”

“Data sharing between those entities becomes core to how they’re going to operate. More and more as we bring in data from renewables, distributed energy resources, batteries, these will add new characteristics to the market, and will require a huge amount of operational data to ensure they run efficiently,” he continues.

William McEvoy agrees: “One of the biggest risks facing utilities is reliability and the impact on the grid from these new disruptive technologies.”

Sharing this data across the supply chain will provide distributors with crucial insight into their performance and position within the market. The difficulty arises if the cost of sharing data – be it through regulatory penalties or losses in competitive markets – outweighs the associated benefits.

Adding to the difficulty will be the attitude shift required from utilities when sharing this data.

McEvoy explains: “It’s a big paradigm shift for utilities who are used to providing reports to regulators just around events or issues. They look at that reporting as penalty reporting. Now, the concern is shifting towards not only what regulators can do with that data, but what competitors can do with it.”

“If you’re in an open, competitive market, that operational information is very confidential,” seconds Thomason.

Thomason sees this as an avoidable risk factor, should the correct preparations be employed. “Not every piece of data at a detailed level will matter, so to make this work there needs to be some sensitivity around segmenting data,” he explains.
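
What such segmentation might look like in practice is sketched below (a hypothetical illustration; the tags, audiences and rules are invented and are not OSIsoft’s design):

```python
# Hypothetical sensitivity-based segmentation of operational data before
# sharing; tag names, audiences and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Reading:
    tag: str            # e.g. "feeder_7.load_mw"
    value: float
    sensitivity: str    # "public" | "regulator" | "internal"

def share(readings: list[Reading], audience: str) -> list[Reading]:
    allowed = {"public": {"public"},
               "regulator": {"public", "regulator"}}
    return [r for r in readings if r.sensitivity in allowed.get(audience, set())]

data = [Reading("feeder_7.load_mw", 41.2, "regulator"),
        Reading("substation_3.bid_price", 55.0, "internal")]
print(share(data, "regulator"))   # commercially sensitive price data stays internal
```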

Another suggestion is that distributors that submit their data openly should receive compensation for the exposure to competitors.

Data sharing ownership – who is liable?

Beyond the need for operational data sharing within utilities is a larger problem – the liability for the data and its security.

“Regardless of what new technology is put on the grid,” explains McEvoy, “the distributors are still responsible for the grid. So then, the more data they share, the more liability they have.”

Thomason believes the industry hasn’t put enough time or effort into how these ideas of data ownership and responsibility come into play.

He continues, “When these utilities send their raw data, external bodies can do advanced pattern recognition, machine learning, design assessment, really massage that data and provide analytics. The problem then becomes who owns this new, refined data.”

The conversation around ownership here is twofold. On the one hand, owning this data renders the utilities, vendors, or analytics companies responsible for it should something go wrong. However, it also gives them a powerful, valuable asset.

Much like with the conversations around blockchain, it appears some serious forethought is needed before full implementation.

Join OSIsoft Group Live Webinar “Transforming IoT Data into Results for Power” on 21 November at 3PM London/10AM New York

Isolation Barrier to Abandon Subsea Pipelines

When normal operation of a pipeline ceases, operators must adopt an appropriate abandonment process. The first step in the abandonment of a pipeline is decommissioning, which is to take it from its operating condition and render it clean and safe on the seabed.

Once decommissioned, pipelines can be recovered to the surface or in some cases left in situ on the seabed and buried by gravel. Pipelines that are severed and removed from the seabed require the bare pipe end to be capped and isolated.

STATS subsea Pipe End Plugs can be supplied as a temporary or permanent cap to terminate open pipe ends. They provide a robust isolation barrier to prevent seawater ingress or residual hydrocarbons escaping into the environment. Pipe End Plugs are supplied with securing wedge taper-lock grips to secure the isolation plug in the pipeline at the specified pressures, while the large-section elastomeric compression seal provides a leak-tight seal, even in corroded and pitted pipework.

10″ Subsea Pipe End Plug

These high-performance plugs are lightweight and simple to install by diver or ROV. Once inserted into the open pipe end, the plugs are activated and set in position by applying torque to the plug torque interface. To aid installation the Pipe End Plugs are supplied with a handling bracket and buoyancy as required.

ROV deployed Pipe End Plug modified to allow commissioning of a 22″ pipeline.

The Pipe End Plug lock and seal technology is based on the Tecno Plug® range of pressurised isolation tools, which have an extensive track record covering a range of pipe sizes, pressures and media, including gas, crude oil and condensate.

42″ Abandonment Plug for a Middle East National Oil Company

STATS Group were approached by a national oil company in the Middle East with a requirement to isolate a 42” subsea pipeline dead leg housed within an oil storage tank situated in the Persian Gulf. The operator had identified irregular flow characteristics which made them suspect the integrity of this 42” dead leg, and was concerned that over time the problem would worsen, with the potential for water contamination of the oil export. STATS engineered a solution: a secure isolation deployed subsea by divers into the 42” dead leg through an open flange entry point. Read how we did it in this case study.

Join STATS Group Live Webinar “Decommissioning Ageing Infrastructure: Abandoning or Re-routing Hydrocarbon Pipelines” on 15 November at 3PM London/10AM New York

New Challenges Faced by the Brewing Industry

The brewing industry has recently undergone several significant changes. The craft beer market has exploded, and the average beer drinker has developed more sophisticated tastes, opting for higher-quality beers. Brewers are looking to expand their business by exporting their product to distant consumers. All the while, the cost of energy, water and hops has increased, putting breweries’ bottom-line profits at risk. To navigate these market shifts, breweries must adapt their methods of production to create a high-quality product that prioritises the flavour and characteristics that appeal to the sophisticated consumer, whilst reducing process costs and increasing the shelf life of their product to maintain its stability, so beer reaches the consumer as fresh as it was when it left the brewery.

So, here’s the question: how can brewers make a better-tasting beer with a longer shelf life, all whilst reducing the cost of production?

Innovation is the answer

To overcome these challenges, breweries must look to implement innovative solutions throughout their brewing and packaging processes. Recent years have seen the emergence of technologies which can benefit the flavour, cost and shelf life of beer. However, beer has long been considered a combination of art and science, and some brewers are reluctant to switch from traditional methods of brewing to modern scientific methods. Nevertheless, an increasing number of breweries, from local craft operations to the global goliaths, are discovering the benefits of modern sterile filtration as a method of microbial stabilisation.

Sterile filtration of beer

The sterile filtration of beer is a method of microbial stabilisation which eliminates unwanted contaminants such as Pediococcus damnosus, Brettanomyces bruxellensis, Lactobacillus lindneri and Lactobacillus brevis from beer prior to packaging. The elimination of these spoilage organisms greatly increases the shelf life of beer compared with untreated beer, enabling brewers to ship their product over greater distances and longer periods of time.

Brewers have historically used other methods of microbial control, such as flash pasteurisation; however, these practices have their limitations. Flash pasteurisation involves heating beer, which has been painstakingly brewed with time and care, to a temperature above 70°C for a short period. This process does denature spoilage organisms within the beer, but it also damages the beer’s delicate flavour and characteristics.

Sterile filtration is the gentler alternative to flash pasteurisation. The implementation of sterile filtration eliminates all spoilage micro-organisms but does not alter or damage any of the desired characteristics and flavours the master brewer achieved.

What’s more, sterile filtration has a lower operating cost than flash pasteurisation, which demands high electrical energy consumption to heat large volumes of liquid above 70°C, along with the consequential water consumption and beer loss experienced when flash pasteurisers deviate from the critical flow rate, a common issue for brewers running flash pasteurisation.
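
The arithmetic behind that energy claim is straightforward; the batch size, temperatures and specific heat below are illustrative assumptions, and real flash pasteurisers recover much of this heat regeneratively, so this overstates the net input:

```python
# Rough energy estimate for flash pasteurisation: Q = m * cp * dT.
# All figures are illustrative assumptions; ~1 kg per litre of beer assumed.
volume_l = 10_000      # batch volume in litres
cp = 4.0               # kJ/(kg*K), approximate specific heat of beer
delta_t = 68           # heat from 4 degC up to 72 degC

energy_kj = volume_l * cp * delta_t
print(f"{energy_kj / 3600:.0f} kWh per batch")   # ~756 kWh before heat recovery
```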

To find out more about sterile filtration, sign up to our webinar “Extending Beer Shelf-Life While Reducing Processing Costs” which will be held on the 16th of April.

Achieving Excellence in Plant Operations

By Ferenc Tóth, Business Solutions Consultant, ABB Enterprise Software

Being a Plant Operator is a very challenging job. If I Google Plant Operator job descriptions, there are exhaustive lists of responsibilities.  Without trying to provide a full account here, let’s look at the most important ones:

  • Operate processing equipment and log equipment checks to follow standard operating procedures and optimize plant efficiencies.
  • Regulate valves, compressors, pumps and auxiliary equipment to direct product flow.
  • Support smooth crush plant running by coordinating with plant management.
  • Analyze specifications and control petroleum refining and processing units operations to produce lubricating oils and fuel through distillation, absorption, extraction, catalytic cracking, isomerization, coking and alkylation.
  • Test liquids and gases for chemical characteristics and colour.
  • Inspect and adjust damper controls, heaters and furnaces.
  • Visit unit to verify efficient operating conditions.
  • Read temperature and pressure gauges and flowmeters, record readings and compile them in records.
  • Inspect equipment to determine nature and location of malfunction like faulty valves, breakages and leaks.
  • Clean processing units’ interior by circulating solvent and chemicals.
  • Determine malfunctioning units through meters and gauges or lights and horn sounds.
  • Set knobs, switches, levers, valves and index arms to control process variables like vacuum, time, catalyst, temperature and flows.
  • Read processing schedules, operate logs, test oil sample results and identify equipment controls changes to produce specified product quantity and quality.
  • Comply with best practices, standard operating procedures and develop and maintain continuous improvement efforts.
  • Control processing units’ activities.

This is a very wide variety of tasks to perform, and their results – be they inspection data, field measurements, sampling data or emission values, to name but a few – are valuable input to someone in a similar or perhaps very different role in the organisation.

Some of these tasks directly contribute to the most important challenges of the plant. No wonder companies are putting a lot of effort into establishing and maintaining a safety culture. Apart from meeting Worker Safety standards, efficiency is usually a key word in a plant environment. A higher efficiency level can be reached in a number of ways, including the enhancement of Plant Reliability through extending asset life or preserving plant integrity and configuration. Utilising common best practices can help to do things once, and do them right the first time, increasing Productivity for the plant. From another perspective, efficiency can be positively influenced by Cost reduction as well: effective issue prioritisation, leveraging performance improvements to processes and equipment, lowering IT costs and shortening outage duration can all help achieve that goal. Last but not least, the plant needs to work according to the rules of Governance and Compliance. Among other things, the business needs to ensure public and stakeholder confidence and regulatory compliance, and should aim to create an environment of continuous improvement.

If I had to summarize it in one sentence, I would say: as efficient as possible, at the lowest cost possible, safely.

So, what can go wrong? Without a digitalised solution that automates the most important Plant Operator tasks, many of them can fail. The most common potential problems include:

  • Limited or no access to critical operation data throughout the organisation
  • Poor communication between departments and groups such as maintenance and operations
  • Inconsistent log entries and recorded information, increasing the chance of errors and non-compliance
  • Information missed or incorrectly transferred between shifts during shift turnover/handover (a structured log entry, sketched after this list, addresses both of these)
  • Unexpected overruns on plant shutdowns/outages
  • Unpredictable equipment failure and/or equipment replacement
  • Lack of integration between systems
  • Aging workforce and high employee turnover
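
What a structured, digital log entry might look like is sketched below (a hypothetical illustration; the field names are invented and do not reflect ABB’s actual data model):

```python
# Hypothetical structured shift-log entry; every entry carries the same
# fields, so handover items can be collected automatically.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogEntry:
    timestamp: datetime
    operator: str
    equipment_id: str
    reading: float
    unit: str
    shift: str
    notes: str = ""
    requires_handover: bool = False   # flag items the next shift must see

def handover_report(entries: list[LogEntry]) -> list[LogEntry]:
    """Everything the incoming shift must acknowledge, nothing forgotten."""
    return [e for e in entries if e.requires_handover]

entries = [LogEntry(datetime(2017, 10, 2, 6, 30), "jsmith", "P-101",
                    12.4, "bar", "night", "discharge pressure trending up",
                    requires_handover=True)]
print(handover_report(entries))
```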

If I only mentioned the problems without offering some kind of solution, you could rightly say that I am a pessimistic person (which I am not). If you are interested in the solution ABB Enterprise Software provides, please join me for our upcoming webinar on 25th October 2017.

Printers & Converters: Can you respond to your customers more effectively than your competition?

Before you get a chance to impress customers with your technical capabilities, low prices, and commitment to their success, you need to perform some very basic tasks. How quickly can you get them a quote? How confident are you in the accuracy of that quote? How quickly can you confirm a ship date? How often do you believe you will actually hit that date?

It takes a lot of effort to put yourself in front of a new prospect. Phone calls, emails, late flights, long drives…these efforts can drag on for months or even years in some cases before you get a chance to convince someone that they should buy from you and your company. There’s nothing more frustrating than putting in the work to get to that point and failing to deliver. Customers expect a lot. They want an accurate, competitive quote, a realistic lead time, and a smooth approval process. Giving them these things can separate you from your competitors. Failure to do so can separate you from prospects and existing customers. Trying to respond to customers without adequate systems and tools quickly becomes a frustrating proposition.

These tasks may seem basic, but they typically require input from a variety of people in different roles. Trying to turn a quote around in a hurry? What if you run digital, flexo, offset, and gravure presses? You probably have to quote a job five different ways to know what is best for you and your customer. Different technical specs and broad quantity breaks can make or break you. What about pricing multiple SKUs together? Building tooling and prepress charges into the unit cost? It goes on and on and gets increasingly complicated. Your customer is not concerned with any of this: they want a price and they want it quickly. If you’re on the road and you rely on an estimating department, you might wait two days or more for an answer. All the while the clock is ticking. Your customer is thinking about other projects, meeting with other vendors, and forgetting about the great meeting they had with you.
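
As a simplified illustration of why quoting a job multiple ways matters (all cost figures below are invented; this is not LSI’s actual estimating logic), amortising one-time charges over the quantity makes the best press flip as volumes grow:

```python
# Hypothetical quote comparison across press types with invented costs.
def unit_price(qty: int, setup_cost: float, tooling: float,
               run_cost_per_unit: float, margin: float = 0.30) -> float:
    """Amortise one-time charges into the unit cost, then apply margin."""
    cost = (setup_cost + tooling) / qty + run_cost_per_unit
    return cost * (1 + margin)

presses = {                      # (setup, tooling, run cost/unit) - assumed
    "digital": (150, 0, 0.085),
    "flexo":   (600, 900, 0.022),
    "offset":  (900, 1200, 0.012),
}
for qty in (1_000, 25_000, 100_000):
    best = min(presses, key=lambda p: unit_price(qty, *presses[p]))
    print(qty, best, round(unit_price(qty, *presses[best]), 4))
# The winning press shifts from digital to flexo to offset as quantity rises.
```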

What about lead time? Do you know your true capacity? Can your production team get back to you with reasonable ship dates? Again, there are many factors involved, including availability of tooling, substrates, specialty inks, and plates or cylinders. What if you get the order and you’re over capacity? You could move it from a 40” press to a digital press to get it out the door, but should you? Will you still make any money? Without the right tools in place, these decisions are often made under duress and with little real understanding of the impact they have on your company.

My name is Michael Hunter and I’m the VP of sales for LSI. We are a software development and consulting firm focused on helping manufacturers improve their processes using a range of tools, with a focus on planning and scheduling. Before coming to LSI I worked in label printing for many years, mostly on the production side but also in various sales roles. I’ve been the sales rep on the road fighting to get an opportunity. At other times I’ve been in charge of production, trying to appease a sales force and get everyone’s orders out on time. What we have developed are tools to bring entire organizations together: supporting your sales goals, defining your production limitations, and allowing you to make SMART decisions.

I hope you can join us for our upcoming webinar to learn more about LSI Print Control and our web-based estimating systems, advanced planning and scheduling tools, and shop floor data collection systems.

Plate heat exchanger certification – a win-win game for the HVAC industry

All marketplaces have parties with different interests and drives. The buying side wants value for money and the selling side is striving for profit. The plate heat exchanger market is no exception. Can we change the playing field and join forces in a win-win game? A step on the way is called AHRI Performance Certification.

Heat duty calculation, an important step in the specification process, is essentially the consultant’s and system designer’s assessment of the maximum performance required of the heating or cooling system. A common industry challenge is the varying input of sizing tolerances and component cost calculations, which complicates the comparison of heat exchanger performance. This makes it harder to choose the optimal heat exchanger for a given application and increases the risk that it will underperform. The built-in uncertainty for everyone involved in recommending, selecting and using heat exchangers – consultants, specifiers and end customers – undermines the confidence of the industry as a whole. At the end of the day, it is the end users that bear the risk of higher operating costs and compromised environmental performance.
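
The heat-duty step itself rests on a simple relation, Q = ṁ · cp · ΔT; the flow and temperatures below are illustrative assumptions:

```python
# Basic heat-duty calculation with illustrative numbers.
flow_kg_s = 25.0      # assumed hot-side mass flow
cp = 4.18             # kJ/(kg*K), specific heat of water
t_in, t_out = 80.0, 60.0

duty_kw = flow_kg_s * cp * (t_in - t_out)
print(f"Heat duty: {duty_kw:.0f} kW")   # 2090 kW

# A sizing tolerance of, say, 5% would mean certifying that the exchanger
# delivers at least 0.95 * duty at the rated conditions.
```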

Alfa Laval is committed to driving the manufacturing side to join forces to achieve higher performance assurance. As early as the 1990s, contact was initiated with the non-profit Air-Conditioning, Heating, and Refrigeration Institute (AHRI) to develop a uniform performance certification standard for plate heat exchangers. The LLHE Certification Program was launched in the US in 2001 and has been a global certification since 2012.

The AHRI Liquid to Liquid Heat Exchangers (LLHE) Certification Program provides third-party verification of thermal performance: it gives independent assurance that the plate heat exchanger will perform in accordance with the manufacturer’s published ratings and has been designed with a reasonable sizing tolerance. The AHRI certification is currently the only such program available in the world for plate heat exchangers, and any supplier can be part of the LLHE Certification Program.

Multiple long-term benefits
Wider adoption of performance certification by heat exchanger suppliers would remove uncertainty about thermal performance and generate a wide range of benefits.

The certification creates a level playing field for comparing the thermal performance and price/performance ratio of plate heat exchangers. For system builders and contractors, this reduces the time spent in the selection process and ensures that the optimum product is selected.

The optimum product provides end users with a heat exchanger capacity sufficient to meet set performance goals, and with the operation of more cost-efficient systems. As calculations have already been verified, the costs of field tests and additional component performance margins are lower. Significantly reduced lifetime operating costs and lower energy consumption are key benefits, as is compliance with targets for power consumption and carbon footprint.

In general, AHRI performance certification enhances confidence in the industry and provides peace of mind for those who rely on system performance – suppliers, consultants, contractors and end users. In the longer term, it will eliminate concern about calculations and promote a stronger focus on energy efficiency, which will foster innovation and stimulate manufacturers to develop more efficient products.

Alfa Laval promotes AHRI certification
Alfa Laval has chosen to take an active role in promoting AHRI certification, and the heat exchangers in the Alfa Laval AlfaQ™ series were the first to be AHRI certified. Since the start, Alfa Laval has maintained a 100% success rate in the AHRI performance certification program.

In our webinar, Stefan Linde, Market Manager at Alfa Laval, will share the hows and whys of working with the AHRI performance certification. He will present practical and useful information for consultants, specifiers and end customers. Register now!

How Can You Make a Smart City a Genius? A Look at Network Requirements for Critical Infrastructure and Services

by Kevin Fitzgibbons

For municipalities around the globe, making cities smarter is no longer a luxury or a long-term goal – it’s a necessity, and it needs to happen now.

There are plenty of applications and devices that can enable better city management, faster and safer mass transit, better equipped first responders and automated machine-to-machine functionality – but simply implementing these devices and applications is not enough. To fully leverage technology, smart cities must consider a wide-area, high-bandwidth private mobile network as much a part of their core infrastructure as roads and bridges and the electrical grid.

A fully optimized mobile network gives a city a number of critical capabilities:

• Quick-deploy ad hoc networks for public safety
• Wi-Fi in subways, trains and stations for passenger use, as well as dynamic advertising and other revenue-generating opportunities
• Real-time CCTV for first responders and mass transit
• Intelligent traffic systems
• Energy and water management devices
• Payment and information kiosks
• Connected city infrastructure

However, the functionality of all of these applications is contingent on a network’s ability to communicate in real time. This demands a unified and secure network that connects in-motion and stationary people and assets and ensures reliable and continuous wireless communications between public transportation, command centers, public safety personnel and others.

You might be asking: Does this kind of network even exist? Will it be reliable in the event of an emergency? How secure is it? Is it adaptable to various needs and applications? Is it scalable and flexible?

One communications network that meets and exceeds all of a smart city’s requirements is Rajant’s Kinetic Mesh.

This is a type of wireless network that is truly mobile: a “make, make, make and never break” approach establishes and maintains connections with every radio, or node, it sees to instantaneously route data via the best available traffic path and frequency. Each node serves as standalone infrastructure, which enables all devices and the network itself to be mobile – it can move around a city with no loss of connectivity.

The network enables the nodes to manage interference, working in concert with networking software to deliver data via the fastest available path; routes are evaluated on a packet-by-packet basis, with no need for input from the network administrator. The nodes seamlessly integrate with each other as well as cellular data/LTE networks and third-party satellite.

If one path becomes unavailable for any reason – such as power loss – the network routes around it, eliminating any downtime. It is not uncommon for a node to have several hundred peer connections, giving it the ability to use any link at any time. This allows Kinetic Mesh to be scalable; the more nodes in a network, the better the performance.
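
Rajant’s actual InstaMesh routing is proprietary, but the general idea of re-evaluating the best path over live link metrics, and rerouting when a link degrades or fails, can be sketched generically:

```python
# Generic per-packet best-path selection over current link costs
# (illustrative only; not Rajant's proprietary algorithm).
import heapq

def best_path(links: dict, src: str, dst: str) -> list[str]:
    """Dijkstra over live link costs; rerun as metrics change per packet."""
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in links.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return []

links = {"A": {"B": 1.0, "C": 4.0}, "B": {"C": 1.0}, "C": {}}
print(best_path(links, "A", "C"))   # ['A', 'B', 'C']
links["A"]["B"] = 10.0              # link degrades (e.g. interference)
print(best_path(links, "A", "C"))   # ['A', 'C'] - rerouted, no downtime
```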

It’s also highly secure; Kinetic Mesh offers the same type of encryption that wireless clients require for Wi-Fi, as well as end-to-end encryption. When encrypted information flows through the mesh and comes out at another node, it stays encrypted all the way through and is not decrypted until it is delivered to its final destination, ensuring privacy.

At each hop in the network, Kinetic Mesh provides per-hop authentication for each packet, as well as secure message authentication. This detects whether data has been tampered with and ensures that a packet received by a node came from a trusted peer, protecting against a type of cyber-attack called packet injection, in which attackers try to “throw” packets into the network to disrupt traffic.
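
Conceptually, the layering works like the sketch below (a generic illustration using per-hop MACs; the key handling is invented and is not Rajant’s implementation): the end-to-end encrypted payload stays opaque at every hop, while each hop verifies its own authentication tag:

```python
# Generic per-hop authentication over an end-to-end encrypted payload.
import hmac, hashlib, os

def hop_send(hop_key: bytes, ciphertext: bytes) -> bytes:
    tag = hmac.new(hop_key, ciphertext, hashlib.sha256).digest()
    return tag + ciphertext                     # MAC travels with the packet

def hop_receive(hop_key: bytes, packet: bytes) -> bytes:
    tag, ciphertext = packet[:32], packet[32:]
    expected = hmac.new(hop_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered packet or untrusted peer - drop")
    return ciphertext                           # still encrypted end to end

ciphertext = os.urandom(64)   # stands in for the end-to-end encrypted payload
key_ab = os.urandom(32)       # per-hop key shared by neighbouring nodes A and B
assert hop_receive(key_ab, hop_send(key_ab, ciphertext)) == ciphertext
```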

Despite the immense amounts of data now at our fingertips, we’ll never be able to truly predict the future. But using Kinetic Mesh as a communications network for a city means all devices and applications are always connected, increasing overall network redundancy and ensuring the continuous functioning of applications such as emergency services, water and electricity, and transit and traffic systems, even in the event of a natural or manmade disaster.

Does this sound like something your city or municipality could use?

To learn more about how Kinetic Mesh is fulfilling the needs of smart cities by ensuring mission-critical applications never fail, join us for a webinar on how to implement a Kinetic Mesh communications system for mass transportation systems, smart city initiatives and public safety within a city or region. I’ll cover current network infrastructure shortcomings, an analysis of key applications that will make cities smarter and more resilient, and the significant role mobility plays in today’s smart city and transportation environment.

TOP 3 QUESTIONS TO ADDRESS WHEN PLANNING FOR THE INTERNET OF THINGS

by Kristen Weygandt  |  May 6, 2016

Business insights from Microsoft, IFS and IDC on what actions you should take to maximize the benefits of the Internet of Things.

Talk of IoT is everywhere. It’s in the news, across social media and a topic of discussion in the office. But that shouldn’t come as a shock, because the Internet of Things (IoT) is a part of our daily lives whether we consciously realize it or not. From mobile devices to buildings to heart-monitoring implants to vehicles, these and many other things are all a part of this network of physical objects known as the “Internet of Things.”

HOW IOT WILL IMPACT BUSINESS

Now that IoT has transformed from a hot topic into a driver of business value, organizations need to start thinking strategically. With IoT driving digital transformation, it is estimated that 75% of large and mid-size companies will have spent $1.5 trillion on IoT solutions by 2020. That being said, no matter your company’s industry or location, having an IoT strategy in place is crucial. These strategies must set the business goals, define the technology requirements and articulate how IoT will ensure the necessary digital transformation of the company.

In a recent IDC report, “Digital Transformation — An Internet of Things Perspective”, IDC explains how IoT will affect business in the coming years, saying:

“The Internet of Things will inevitably affect companies and organizations as the number of connected endpoints grows exponentially and produces a massive amount of data.”

They also stress why companies should start formulating their IoT strategy now, explaining that:

“The explosion in the number of devices and data volumes will drive the need for more enterprise systems to deploy, manage, and make use of these. Establishing interoperability and connectivity standards will become a priority.”

FORMULATING AN INTERNET OF THINGS STRATEGY

Before you formulate your IoT strategy, check your data and see how it can be used, evaluate and maybe change processes and IT solutions, and gather advice from experts and those who can share their first-hand experiences with you.

On June 7, 2016, Microsoft, IFS and IDC are hosting a webinar titled “Top 3 Questions to Address When Planning for IoT” to share their views on how to effectively realize business value from the IoT. In this webinar you will learn from three industry experts who will share their experience and views on how to embark on this journey: Microsoft from a technology perspective, IFS from a business software perspective, and IDC sharing today’s and tomorrow’s best practices. Registration is free, but the insight you’ll receive by attending is invaluable.

TOP 3 QUESTIONS TO ADDRESS WHEN PLANNING FOR IOT WEBINAR

The time for action is now. Register today and learn from the experts on how your business can put a strategy in place to maximize the benefits associated with IoT.