Engineering firms are undergoing a fundamental transition. From its global industrial IOT engagements over the last two years, Flutura has distilled six core transitions at play. Depending on the vertical (utilities, heavy engineering, oil and gas) and the market (advanced/emerging), each of these transitions has a different velocity and, more importantly, each is irreversible. So what are the six key irreversible transitions accelerating and gaining momentum in this fast-changing marketplace?

Transition-1 : From Engineering to Intelligence
Looking at an engineering product through the eyes of a customer, one can differentiate it on the engineering dimension or the intelligence dimension. Which provides more headroom to shape the customer's perception of value in the product - the intelligence or the engineering?
The car is a classic example where the headroom to impact the customer with intelligence far exceeds the headroom to impact with engineering. In the auto industry it is relatively easy to create customer "wow" by providing mileage intelligence, personalised car settings or "OnStar"-like remote intelligence; far harder to do so through a disruptive change to engine design.
Transition-2 : From Data Streams to Revenue Streams
Is the increase in data streams matched by a corresponding increase in revenue streams?
For example, in building management systems, the decreasing cost of instrumenting assets like boilers, chillers and lifts is going to cause an explosion in data streams. The key question is whether this explosion in data streams also results in an increase in revenue streams from newer business models around remote monitoring, predictive maintenance and the like.
Transition-3 : From Loosely coupled IT-OT systems to Tighter IT-OT coupling
There are two worlds in engineering industries: the Operational Technology world of SCADA/PLC/historians and condition monitoring software, where ownership sits with engineering, and the world of business applications, owned by the CIO organisation.
For example, in the oil and gas industry, condition monitoring systems and process historians record a lot of data about the state of the process. But data from operational systems like SCADA and historians is not "fused" and triangulated with the rest of the IT data - say, shift management systems, employee databases or identity management systems, which tell you who operated the asset.
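As a rough sketch of this kind of IT-OT fusion, the snippet below joins out-of-bounds historian readings against a shift roster to answer "who was operating the asset when it misbehaved". All field names, assets and thresholds here are hypothetical, not taken from any specific SCADA or HR system:

```python
from datetime import datetime

# Hypothetical historian events (OT side) and shift roster (IT side)
historian_events = [
    {"asset": "pump-07", "time": datetime(2015, 3, 2, 14, 30), "vibration_mm_s": 9.8},
    {"asset": "pump-07", "time": datetime(2015, 3, 2, 22, 10), "vibration_mm_s": 3.1},
]
shift_roster = [
    {"asset": "pump-07", "operator": "J. Rao",   "start": datetime(2015, 3, 2, 8),  "end": datetime(2015, 3, 2, 16)},
    {"asset": "pump-07", "operator": "A. Mehta", "start": datetime(2015, 3, 2, 16), "end": datetime(2015, 3, 3, 0)},
]

def fuse(events, roster, vibration_limit=8.0):
    """Attach the on-shift operator to each out-of-bounds historian event."""
    alerts = []
    for ev in events:
        if ev["vibration_mm_s"] < vibration_limit:
            continue  # within bounds: no triangulation needed
        for shift in roster:
            if shift["asset"] == ev["asset"] and shift["start"] <= ev["time"] < shift["end"]:
                alerts.append({**ev, "operator": shift["operator"]})
    return alerts

alerts = fuse(historian_events, shift_roster)
```

The 14:30 over-vibration event lands inside the first shift, so the fused alert carries the operator's name alongside the sensor reading.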
Transition-4 : From Fragmentation to Standardisation
One of the reasons for the explosion of the consumer internet was interoperability and standards. One of the main stumbling blocks for industrial IOT is the variety of ways in which assets "talk" to each other. But this is changing.
For example, in the utility industry there is OSGP (Open Smart Grid Protocol), whose intention is to harmonise communication around a common energy exchange language.
Similarly, in the oil and gas industry, standards like WITSML (Wellsite Information Transfer Standard Markup Language) provide non-proprietary interoperability standards for exchanging digitised oil well information.
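By way of illustration, because WITSML is XML-based, even standard tooling can read it. The fragment below is a heavily simplified, made-up stand-in for a WITSML well document; real documents carry versioned namespaces and far richer schemas:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical WITSML-style fragment (element names illustrative)
doc = """
<wells>
  <well uid="W-424">
    <name>Eagle Ford 12-H</name>
    <statusWell>producing</statusWell>
  </well>
</wells>
"""

root = ET.fromstring(doc)
# Index wells by their unique id, mapping to the human-readable name
wells = {w.get("uid"): w.findtext("name") for w in root.findall("well")}
```

The point is the interoperability: because the schema is open, any consumer can parse well data without proprietary drivers.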
Transition-5 : From Edge Intelligence to Remote Intelligence
Flutura has also observed an increasing dependence on remote intelligence. For example, the landing gear is typically controlled by pilot instruction (edge intelligence). Suppose an adverse event happens and the pilot is incapacitated: in order to land the aircraft safely, ground control can take over the flight using remote intelligence. Pilot programs are under way to implement this use case to augment passenger safety.
Transition-6 : From Pipe to Platform
Typically, when an asset was monitored, only single event streams were watched using condition monitoring systems (how many times did the process exceed its bounds?). But engineering firms are realising that to deeply understand the context of the asset, historian data has to be triangulated with ambient condition data and operator/human context. Doing so requires transitioning to a multi-dimensional "platform" mindset as opposed to a "pipe" mindset, which is one-dimensional.
Closing Thoughts ...
Flutura has observed that these transitions do face adoption friction. But this friction is being "lubricated" away by the emergence of disruptive companies, regulatory pressure and customer expectations. Even if just one of these Industrial IOT transitions reaches a tipping point, it can disrupt engineering marketplace dynamics considerably. As Jay Asher said:
“You can't stop the future
You can't rewind the past
The only way to learn the secret ... is to press play.”
Flutura is excited as the Industrial world is increasingly moving towards pressing "play" :)

There is a seismic shift under way in the engineering industries. The decreasing cost of sensors, the increasing instrumentation of assets and the need for new revenue streams are forcing engineering firms to re-imagine business models. The fusion of “atoms with bytes” promises to unlock previously unrecognised value and generate additional revenue streams predicated on the intelligence extracted from the data. As machines increasingly become nodes in a vast industrial network, value is shifting towards the intelligence which controls the machines. The intelligent platformization of machines has begun.

Keeping in mind this fundamental shift in value from atoms to intelligence, Flutura has defined 5 levels of maturity to assess the machine intelligence quotient of an engineering organisation. The highest level of maturity is "Facebook of machines" with ubiquitous sensor connectivity and the lowest is an asset which is "unplugged" where the device is offline. As organisations embark on a journey to intensify the intelligence layer in their IOT offering it makes sense to map where they are in their current state of maturity.

The 5 levels of machine intelligence with specific illustrative examples are outlined below


Level-1 : Unplugged
This is the lowest level in the maturity map. At this level, the device or sensor is 'unplugged' from the network. There are no “eyes” to see the state of the machine at any point in time. The machine is offline to the engineering organisation. A vast majority of engineering firms manufacture assets which fall into this category. For example, a wide variety of industrial pumps are still completely mechanical devices with no sensors to instrument them.


Level-2 : Edge Intelligence
This is the next level of machine intelligence on the maturity curve. At this level, the device is connected to the network, and rudimentary intelligence exists on the device to take corrective, self-healing action. Examples of assets with edge intelligence include cars which can alert drivers to basic conditions needing intervention, or a boiler which has edge intelligence to switch valves on and off based on steam pressure.
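The boiler example can be sketched as a simple edge rule: vent steam when pressure leaves a safe band, with a little hysteresis so the valve does not chatter. The thresholds below are invented for illustration, not real setpoints:

```python
SAFE_MAX_BAR = 12.0                 # assumed upper bound on safe steam pressure
RECLOSE_BAR = 0.9 * SAFE_MAX_BAR    # re-close only once pressure falls 10% below the limit

def valve_command(pressure_bar, valve_open):
    """Return the next relief-valve state for one pressure reading."""
    if pressure_bar > SAFE_MAX_BAR:
        return True                 # over-pressure: open the valve
    if valve_open and pressure_bar < RECLOSE_BAR:
        return False                # pressure has fallen enough: re-close
    return valve_open               # otherwise hold the current state

state = valve_command(13.0, False)  # an over-pressure reading opens the valve
```

This is the whole point of edge intelligence: the corrective action needs no network round-trip and works even when the asset is disconnected.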


Level-3 : Remote Monitoring
At this stage the device can be remotely monitored from a central command centre. For example, Flutura worked with an asset service provider monitoring the health of geographically dispersed connected buildings in real time. This requires the platform to ingest billions of events from boilers, chillers, alarms etc. in real time and make sense of which assets need intervention from the command centre and which assets are healthy.
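A minimal sketch of the command-centre triage step - counting out-of-bounds events per asset over a window and flagging repeat offenders - might look like this. Event fields and the tolerance are illustrative; a production system would run this over a streaming pipeline, not an in-memory list:

```python
from collections import defaultdict

def triage(events, limit=2):
    """Flag assets whose readings breached their threshold at least `limit` times."""
    breaches = defaultdict(int)
    for ev in events:
        if ev["value"] > ev["threshold"]:
            breaches[ev["asset"]] += 1
    return sorted(asset for asset, n in breaches.items() if n >= limit)

# Invented window of events from a connected building
events = [
    {"asset": "chiller-3", "value": 9.1, "threshold": 8.0},
    {"asset": "chiller-3", "value": 9.4, "threshold": 8.0},
    {"asset": "boiler-1",  "value": 5.0, "threshold": 8.0},
]
flagged = triage(events)   # assets needing command-centre intervention
```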


Level-4 : Predictive Intelligence
This takes the intimate understanding of assets to the next level. It involves triangulating patterns from historical asset data, ambient conditions etc. to predict failures and defects. At this stage, there is enough causal knowledge available to model when the device will break down and proactively trigger an intervention, be it a field visit or a part replacement.
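As an illustration of what such a predictive layer computes, the sketch below scores failure risk by combining a few hypothetical signals through a logistic function. In practice the weights would be learned from historical failure data (e.g. via logistic regression) rather than hand-set as here:

```python
import math

# Illustrative, hand-set weights over hypothetical signals
WEIGHTS = {"vibration_mm_s": 0.6, "ambient_temp_c": 0.05, "hours_since_service": 0.002}
BIAS = -7.0

def failure_risk(signals):
    """Squash a weighted combination of signals into a 0..1 risk score."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in signals.items())
    return 1.0 / (1.0 + math.exp(-z))

healthy = failure_risk({"vibration_mm_s": 2.0, "ambient_temp_c": 20, "hours_since_service": 100})
worn    = failure_risk({"vibration_mm_s": 9.0, "ambient_temp_c": 35, "hours_since_service": 4000})
```

Assets whose score crosses a chosen threshold would trigger the proactive intervention described above: a field visit or a part replacement before the breakdown happens.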


Level-5 : "Facebook of machines"
This is the most evolved state of engineering intelligence, where all assets the organisation has deployed are seamlessly connected in real time to the field force, head-office engineers and command centre observers. Very few global engineering firms are at this level of maturity.

Closing thoughts

As business models evolve, driven by pervasive hyper-connectivity of devices across industries like utilities, energy, oil and gas, and intelligent building management systems, competitive advantage will shift towards differentiated, value-adding intelligence platforms. Flutura intends to leverage its Cerebra Signal Studio Platform to accelerate signal detection and deliver value-added business outcomes.

Nikola Tesla predicted in 1926 that, 'When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole...and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket'

It has come true almost a century later. Since then, the number of connected devices has grown exponentially, and these devices emit data every microsecond. This has given rise to the IOT phenomenon. But why is IOT so important?

Most big data analytics today is performed on human-generated data, and it has delivered unparalleled ROI, but what is yet to be explored is the potential in machine data. The amount of data created by machines is so huge and continuous that human-generated data pales in comparison.

The number of connected devices is increasing by the second. The average American household has six devices connected to the internet. Stored in these devices is the capability to predict events well before they happen.

These devices will create so much data that current databases will find it hard to fathom, let alone digest. New pathways have to be discovered, new methods invented. That is going to be an amazing challenge to crack.

In every sphere, IOT can unlock opportunities like never before. It will encompass every phase of our life, right from our birth to death, and probably after. Healthcare, Lifestyle, Work, Energy, Entertainment, you name it, and IOT touches it. 

This is exactly why now is the right time to get on the IOT bandwagon. At Flutura, we have started taking strides in these spheres and more, concentrating on hardcore engineering areas like utilities, oil and gas, and manufacturing, to bring down their revenue leakages and improve operational efficiencies.

A guest post by Clif Triplett, ex-CIO, Baker-Hughes and member of Flutura advisory board.

“Big Data”; another big idea like “Cloud”; is it something we should be embracing right now, or risk being left behind forever? Is this another vague term that can be applied to countless projects? Everyone is doing it, our vendors will tell us. You may have the question, “Am I actually running a Big Data project, or do I just have to deal with a lot of data?” What is “Big”?

As I look forward, I think Big Data will begin to take distinct forms, just as the concept of “Cloud” has matured over recent years. Business cases will become more obvious and solutions better defined. The enablers to address “Big Data” opportunities will also become more obvious as the dynamics of tomorrow evolve dramatically compared to just a few years ago.

The most influential contributors emerging on the horizon that will change the game are the following:

  •       Explosion of sensors and digital inputs
  •       Very large memory machines (terabytes of memory and more)
  •       Easy access to thousands of processors from Cloud providers like Azure and Amazon
  •       HPC environments and Cloud operating systems that can organize computing across many independent machines
  •       The continued explosion of readily available data on the internet (e.g. weather, stock market, government records, news, social media)
  •       Web crawlers that can locate and index information on the web
  •       Predictive Analytic engines
  •       Open Source software (e.g. predictive analytic engines, search engines, cloud operating systems, and, I anticipate, the emergence of applications such as those able to judge and predict sentiment about topics)
  •       Proven use cases by industry
  •       Commercial-Off-The-Shelf (COTS) package solutions connecting multiple data sources and answering specific business challenges, e.g. sales forecasts, predictive maintenance, process control, inventory management
This convergence of technology and information now makes it possible to define sub-classes of Big Data projects, such as Demand Forecasting, Failure or Defect Management, Market Sentiment, Situational Alternative Simulation, and I am sure many others.

Big Data product segments will also be clearer and strong leaders will emerge.  One of the product segments where we will see specialization will be hardware platforms.  We will see specialized machines such as HANA platforms and Solid State storage optimized for very large data sets.   We will also see Big Data Cloud platforms leveraging the momentum of two major technology trends; Cloud and Big Data.   There will be many analytic platforms; both open source and proprietary.  We will see boutique solutions to specific problems like product launch predictive sales analytics and advanced maintenance management programs. 

The other big player to emerge will be taxonomy platforms that assist in connecting disparate data sources and applying context to support integration.   Today most major applications provide APIs to their applications.  I foresee them in the future also providing taxonomy context to allow more dynamic integration of systems.

So, what are some of the issues that this future of Big Data will bring upon us in the near term? Organizational change management will be top of the list. This emerging trend surrounding Big Data will create many new roles, new skill demands, new systems, and new approaches to business. In short: change.

First of all, we will face the issue of where our Master Data is and who owns it. This problem has been with us for some time, but now it will emerge as a very strategic decision and not just an IT annoyance. Second will be the issue of data quality, and how we clean up all the conflicting or missing data we have uncovered. The list will continue to grow; here are a few more to consider:

  •       Development of Business Cases to clean up data
  •       Engaging the business in data governance
  •       New Technologies for solving this new class of Big Data business solutions
  •       New skills required and a shortage of top talent
  •       Technology and Business roadmaps – what does it look like?
  •       Many new vendors emerging and the old favorites trying new things they have never done before – how do we know who knows what they are doing?
  •       Project estimating and runaway projects

We have seen many of these challenges before, and many of us will see them again. It will be exciting times, and we had better start preparing now. The planets are aligning and Big Data is arriving. It may still be a bit too far away for some to see clearly, but keep your eyes open, for it will be upon you soon.

This guest post is from Francisco Maroto, CEO and founder of OIES Consulting. He has deep knowledge of the telecom, M2M, Internet of Things and Big Data businesses and a solid technical background in the technology sector. He writes about how IoT and Big Data can be used to improve our lives in all aspects.

The world needs IoT and Big Data to be transformed from Reactive to Preventive
We are living in a world of services run by induced consumption. The technological advances that receive most media attention are those aimed at consumers' leisure. Big Data and the Internet of Things (IoT) get more attention when their value is justified on financial or marketing criteria, such as user experience or increased sales, than when they are used to predict situations that have not happened yet.
Gary Atkinson, Director of Emerging Technologies at ARM, discussed during The Connected Conference, the first international conference dedicated to the Internet of Things, how IoT can help with the big challenges humanity faces.

Though well known, it is important to recall the main challenges the planet is heading towards, and that IoT and Big Data can help solve:
·         Food is running out of room.
·         Water is our rarest commodity.
·         Energy needs to be cheaper to be efficient.
·         Healthcare is a growing problem entailed by an ageing population.
·         Transport: everyone will be able to afford cars, but won't be able to afford to park or fuel them.

There are many other applications where IoT and Big Data technologies are badly needed, but where it is unfortunately harder to justify the investment. The benefits are as obvious as they are necessary, but they require commitment and willingness from companies and governments to think ahead and avoid the risk of accidents or disasters, rather than merely reacting after they have unfortunately occurred.

After the Fukushima nuclear disaster, some hackers designed customized Geiger counters that automatically updated radioactivity levels on an online map and people could see real-time radiation levels. A similar example emerged in earthquake-prone Chile, where a student designed a seismometer to tweet its readings. It quickly amassed more than 300,000 followers, who were grateful for the early alerts.

At a recent Fujitsu event, the company explained how it is building a predictive alarm system using IoT and Big Data to predict offshore accidents, tsunamis and other natural disasters using marine sensors and data analytics.

Canada came up with another initiative. The Smart Oceans BC project will use sensors and data analytics to enhance environmental stewardship and public and marine safety. It will monitor vessel traffic, waves, currents and water quality in major shipping arteries, and will include a system to predict the impact of offshore earthquakes, tsunamis, storm surges and underwater landslides.

We have to transform organizations and governments from reactive to preventive. With the aid of asset intelligence and predictive-maintenance analytics solutions, and a more predictive and proactive service model, organizations and governments will not only reduce their service cost but also deliver additional value to customers and citizens.

When I launched OIES Consulting with the mission “To become one of the world's leading independent consulting companies, delivering top-class M2M/Big Data services supporting innovations that improve the way people work and live", I started my search for companies like Flutura, with its vision to transform operational outcomes by monetizing machine data, mostly in oil and gas and utility companies across the world.

For instance, in industries like Oil & Gas, ensuring safe and smooth operations is a massive exercise in collaboration, teamwork, project management and oversight. Given that the work is complicated and executed in highly hazardous conditions, there is a need for better systems of risk assessment and safety management. And although comprehensive policies, procedures and regulations exist to keep things safe, the industry continues to experience fatalities, serious injuries and other forms of damage to people, property, the environment and reputation. There is significant headroom to adopt and benefit from machine-to-machine technology and advanced analytics of raw data from the critical systems which affect safety, rather than relying on safety statistics alone.

The benefits of implementing asset management solutions that use cutting-edge Big Data, M2M and predictive analytics technologies are enormous: they determine optimal asset availability and provide detailed predictions that avoid human and material losses.


Solving the big challenges that humanity faces needs the IoT, Big Data and Predictive Analytics.
For the first time in our history we will have the possibility of connecting billions of sensors placed anywhere (in the atmosphere, in the middle of the oceans, in cities, in vehicles, ...), collecting and analyzing data in real time, and providing more precise and accurate information that will allow us to predict natural disasters, resulting in the saving of human lives.


As IOT data proliferates, one of the game-changing use cases it enables is dynamic pricing. As assets get instrumented, one can have usage-based pricing of assets on lease. We are already seeing disruptions in pricing models in the automotive industry, where sensor data, as a proxy for driving habits, is fuelling usage-based insurance premiums.

Flutura has been working with utility companies; utilities are among the industries in the Industrial Internet/IOT category undergoing fundamental shifts because of deregulation and increased instrumentation of the grid. Pricing in the utility industry is a crucial lever, and there is a lot of headroom for utility companies to innovate on pricing using data science. In this blog, Flutura outlines 9 components of the pricing framework, of which 5 analytical constructs can be used to dig deeper into pricing models. Digging deeper into the science of pricing using big data analytics can impact multiple utility outcomes: profitability, customer churn (now that utilities are being deregulated) and peak grid stress (influencing customer behaviour at peak time using pricing).


Analytical Construct-1: PRICING MODEL SELECTION

Based on the maturity of the market, the intended business outcome and the degree of deregulation in energy distribution, a number of pricing models can be adopted. One could adopt TOU (time-of-use) pricing, which charges a premium for energy consumed at peak time. One could adopt a UBP (usage-based pricing) model, where, based on deviance from median usage, a customer is charged a premium for the differential energy consumed. The construct to operationalize has to be carefully chosen based on the factors outlined above. Some of the specific signals governing the choice of model are the sensitivity of the neighbourhood to price changes, the sentiment of the end consumer towards marginal price changes, the segment the customer belongs to, and a past knowledge bank of what stimuli worked for that consumer segment. Each of these is elaborated below.
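To make the two models concrete, the sketch below computes a household's daily bill under an illustrative TOU tariff (premium rate from 10:00 to 15:00) and an illustrative UBP tariff (premium rate beyond a usage threshold). All rates and thresholds are invented, not any utility's actual tariff:

```python
PEAK_HOURS = range(10, 15)           # assumed peak window, 10:00-15:00
TOU_PEAK, TOU_OFFPEAK = 0.30, 0.10   # illustrative $/kWh rates
UBP_BASE, UBP_PREMIUM, UBP_THRESHOLD = 0.12, 0.25, 30.0  # $/kWh rates, kWh threshold

def tou_bill(hourly_kwh):
    """Time-of-use: premium rate for energy consumed during peak hours.
    hourly_kwh is a list of 24 readings, index = hour of day."""
    return sum(kwh * (TOU_PEAK if h in PEAK_HOURS else TOU_OFFPEAK)
               for h, kwh in enumerate(hourly_kwh))

def ubp_bill(hourly_kwh):
    """Usage-based: premium rate on consumption beyond the threshold,
    regardless of when it was consumed."""
    total = sum(hourly_kwh)
    extra = max(0.0, total - UBP_THRESHOLD)
    return (total - extra) * UBP_BASE + extra * UBP_PREMIUM

flat_day = [1.0] * 24                # 1 kWh every hour of the day
tou, ubp = tou_bill(flat_day), ubp_bill(flat_day)
```

The same consumption profile bills differently under each construct, which is exactly why the choice has to be matched to the intended behavioural outcome.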


Analytical Construct-2: PRICE ELASTICITY

The price elasticity process essentially answers a simple question: is the neighbourhood responsive to price changes? Is Houston more sensitive to a $0.25 increase in unit energy pricing than Dallas? What is the % change in peak energy consumed in Palo Alto and its neighbourhood when there is a change in peak price? Do households in Austin change their energy consumption pattern in response to time-of-use (TOU), usage-based or location-based pricing frameworks? Depending on the observed behaviour, we can tag Austin as a price-sensitive neighbourhood using markers. Also, if two neighbourhoods have similar characteristics, one can do what-if scenario analysis to understand the potential reduction in energy consumed in a neighbourhood as a function of changed pricing.
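The underlying arithmetic is the classic elasticity ratio: percent change in peak consumption over percent change in price. A toy example with invented figures:

```python
def price_elasticity(q_before, q_after, p_before, p_after):
    """Elasticity = % change in quantity consumed / % change in price."""
    pct_q = (q_after - q_before) / q_before
    pct_p = (p_after - p_before) / p_before
    return pct_q / pct_p

# e.g. a $0.20 -> $0.25 peak-price rise alongside a 10% drop in peak kWh
e = price_elasticity(q_before=1000.0, q_after=900.0, p_before=0.20, p_after=0.25)
```

A negative elasticity with large magnitude would mark the neighbourhood as price sensitive; a value near zero would mark it as insensitive, so pricing stimuli would be wasted there.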

Analytical Construct-3: PRICING MODEL A/B TESTING

Let's assume there are two variants of an energy pricing model, and the utility wants to test which of them impacts the intended energy consumption habits of users. One is time-of-use pricing, where energy consumed at peak time, say 10-3, is charged at a premium; the other is usage-based pricing, with differential charging for consumers who cross a certain energy threshold irrespective of when they consume. The utility wants to figure out which version of the pricing model is better. To do that, the utility subjects both versions to experimentation simultaneously. In the end, it measures which pricing version was more successful and selects that version for real-world use. An A/B test is a perfect construct to quantify the impact pricing has on energy behaviour.
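The comparison reduces to measuring the metric of interest per arm. The sketch below uses invented peak-consumption readings and a simple difference of means; a real test would add randomised assignment, adequate sample sizes and a significance test:

```python
from statistics import mean

# Hypothetical peak-hour kWh per household under each pricing variant
tou_arm = [4.1, 3.8, 4.4, 3.9, 4.0]   # households on time-of-use pricing
ubp_arm = [4.9, 5.1, 4.7, 5.0, 4.8]   # households on usage-based pricing

lift = mean(ubp_arm) - mean(tou_arm)  # positive => TOU curbs peak usage more
winner = "TOU" if lift > 0 else "UBP"
```

In this made-up sample the TOU arm shows lower peak consumption, so TOU would be the variant rolled out for real-world use.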


Analytical Construct-4: CONSUMER SEGMENTATION

A consumer can be segmented based on his/her behavioural profile using clustering algorithms like K-means or nearest-neighbour methods. Depending upon the segments which emerge, we can map a pricing model to each emergent behavioural segment and measure its response to that pricing stimulus.
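For illustration, here is a bare-bones K-means over two invented behavioural features (mean daily kWh, share of usage at peak). A real segmentation would use a library implementation such as scikit-learn and many more behavioural features:

```python
def kmeans(points, k=2, iters=10):
    """Tiny K-means: assign each point to its nearest centroid, then
    recompute centroids as cluster means, for a fixed number of iterations."""
    centroids = points[:k]                       # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Invented consumer profiles: (mean daily kWh, peak-usage share)
profiles = [(8, 0.6), (9, 0.7), (30, 0.2), (32, 0.25)]
centroids, clusters = kmeans(profiles)
```

The two emergent segments here (low-usage/peak-heavy vs high-usage/off-peak) could each be mapped to a different pricing stimulus, as described above.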


Analytical Construct-5: CONSUMER SENTIMENT

It is very important to factor consumer sentiment in as a signal to the pricing model, so that one can find a graceful balance between the intended consumer behaviour at peak time and churn (specifically as markets get deregulated and consumers have a choice). Two key places where one can sense the customer's pulse on a pricing change are Twitter feeds, for real-time expressions, and call centre channels, where representatives can dig deeper into their responses.

Flutura strongly feels that as the world around us gets increasingly digitized using sensors, IOT industries like utilities, auto and asset engineering firms will use real-time data combined with advanced math as a strategic weapon to compete on pricing. As Warren Buffett rightly said, "Price is what you pay. Value is what you get."

Flutura is proud to present a series of guest blog posts by Industry stalwarts. 

This first one is by Clif Triplett. Clif served as the first CIO of Baker Hughes, the 100-year-old, $20 billion revenue global oilfield services business. Clif also held several executive positions at Motorola, General Motors and other Fortune 200 companies, and served in the US Army between 1980 and 1990. He writes about managing scale – a CIO's dilemma.

One of the challenges most CIOs face today is the cyclic nature of business. We as IT leaders must devise ways to be nimble and respond to the financial demands and pressures of meeting the expectations of the quarter. To meet these demands we must plan and prepare. A top priority is to drive more of our cost to a variable cost structure so that we have the levers we need to respond. Next, we need to understand just how far we may need to move our cost position to meet the financial valleys sometimes demanded by business conditions. If we want to be considered a valuable element of the business and a true business partner, we must show we can respond to these market pressures and make the sacrifices or, better yet, take the actions necessary to achieve financial targets.
What are some of the things we can do to meet this challenge? First, we need to instill in our project managers the practice of managing projects to hit quarterly financial targets, and quickly migrate to managing to the month. Virtually all of our project and program managers manage to a budget, but perhaps not with the concept of being able to stop the plan at any month end and have delivered the associated value. Key to this is the design of the projects. This is not to say we create a process to just stop a project on demand; it means we design our projects to deliver value each quarter and can stop additional expenditures at that point if required, without stranding past expenditures. As IT leaders we can generally get at least a one-month outlook on whether the business will need us to pull back or can stay on plan, and we need the insight and processes to shift our expenditure rate dramatically.

Part of project design must be the planning of staffing and infrastructure requirements. We need to establish contracts that allow us to draw back or even halt expenses for a portion of our staff on very short notice, generally less than 30 days. One of the easiest areas to pull back cost is development. Projects should be designed in one-month increments of capability that would be of value to the business. If cost pressures necessitate a slowdown, we halt the next unit of deployment. Deployment is the next area where cost controls can be implemented. If cost pressures demand it, we can delay deployment and all the associated expenses, provided we designed and contracted in a manner that lets us respond to such circumstances.

Let's review how this might be possible. Contract software license costs such that they are not activated until the system is moved into a production environment. Second, the infrastructure you require to run the application can be purchased in a similar manner, such that it is not paid for unless being used. This is possible with strategies like Platform as a Service (PaaS), Infrastructure as a Service (IaaS) or even Software as a Service (SaaS). Testing and deployment services can significantly leverage outsourcing, and can therefore also be pulled back on those portions of the system that have been placed under financial suspension.
The IT market has shifted and it is now possible to buy most IT resources as services, and in a variable cost model.  We must learn how to take advantage of this and begin to migrate to these new financial service models so we can meet the financial challenges placed upon the business. 

Now the previous scenario is one most of us have faced, but another scenario faces us as well – success. What if our project is highly successful or popular, and the business wants or needs us to go faster; and what if cost and budget are not an issue? Business leadership generally does not understand the concept that money can't fix most things. I believe it is in our best interest to be able to accept money and accelerate to meet business demand. When IT says it can't deliver regardless of cost, that is a major missed opportunity. This scenario is similar to the challenge of being asked to scale back, but it is one of scaling up to meet demand. Once again, to meet this challenge one must prepare. We must plan for success, and for the demand for our services exceeding our current plan and available resources. How do we meet this challenge? We need to attempt to put in place contracts and processes that allow us to double our capacity in two weeks. This seems perhaps a bit odd to many IT teams, but whether you exercise this capability or not, it will have the secondary impact of improving service delivery. To double your capacity, you must have well-defined processes, process discipline and a very engaged, open communication process with your key IT suppliers.

The challenge can be met. You can design your organization to scale back or scale up rapidly, but it does take planning. The IT services market now provides new tools at our disposal to meet these new demands we need to react to. If you have not begun to prepare, it is now time. Failure to be nimble can cause great harm to the reputation of the IT organization and to the level of partnership you will be able to earn and develop with the senior management of the business.


Flutura is a niche Big data analytics solutions company based out of Palo Alto (Development Centre in Bangalore) with a vision to transform operational outcomes by monetizing machine data.

Flutura is funded by Silicon Valley’s leading VC fund The Hive (based in Palo Alto) which primarily invests in big data companies worldwide.
