The benefit of all this is that people get a much more direct understanding of what is happening in the reservoir, and better quality data streams to export into reservoir models.
And if the data is more trusted, more people in the organisation will use it.
The technology has been developed outside the oil and gas industry for the past 20 years. It is used in many other industries, but IO-hub has exclusivity to sell it in the energy industry.
IO-hub thinks that the service should be particularly useful for distributed temperature sensing (DTS), pressure / temperature and flow data, data from electrical submersible pumps and drilling data.
The raw data is sent to IO-hub’s service, and a “validated” data stream, with additional information, comes back together with a trend analysis and decision support tools. IO-hub also offers optional services to store the data, and provide notifications about exceptions in the data stream.
IO-hub is currently working with an (unnamed) oil and gas company to get everything working.
The company CEO is Philippe Flichy, who was previously VP business development with Merrick Systems, and a digital solution manager at Schlumberger. He was also intranet manager for the 2002 Salt Lake City Winter Olympics.
The CFO is Robert Flavin, previously CFO of Sequent Energy; the CTO is Thomas Lovell, previously a Senior Engineer with Foster Miller. The Director of Applied Technology is Rick Mauro, a Director at Endeavor Management, formerly with Landmark and Mobil.

Algorithms

The algorithms look at the level of “information entropy”, or disorder, in the information.
This means that, when presented with a series of readings and trying to work out what is actually happening, the computer can fit a trend to the points according to their complexity, rather than simply averaging them.
Typical streams of data from oilfield sensors contain outliers, where the sensor reading spikes for a short time. A decision needs to be made about whether this indicates that something is wrong in the well, whether it is a short-term problem with the sensor, or whether the apparent outlier represents a real event.
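As a rough illustration of the kind of decision involved, a conventional rolling median / MAD test can flag readings that stand out sharply from their local neighbourhood. This is a generic technique, not IO-hub's algorithm, and the window size and threshold below are arbitrary.

```python
# Hypothetical sketch: flag short-lived spikes in a sensor stream.
# Not IO-hub's method; window and threshold values are illustrative only.
import numpy as np

def flag_spikes(readings, window=25, threshold=5.0):
    """Mark readings that deviate sharply from their local neighbourhood.
    A flagged point still needs a human (or further logic) to decide whether
    it is a well problem, a sensor glitch, or a genuine short-lived event."""
    readings = np.asarray(readings, dtype=float)
    flags = np.zeros(readings.size, dtype=bool)
    half = window // 2
    for i in range(readings.size):
        lo, hi = max(0, i - half), min(readings.size, i + half + 1)
        local = readings[lo:hi]
        median = np.median(local)
        mad = np.median(np.abs(local - median)) or 1e-9  # avoid divide-by-zero
        flags[i] = abs(readings[i] - median) / (1.4826 * mad) > threshold
    return flags

# Example: a steady pressure trace with one spurious spike at index 50
pressure = np.full(100, 205.0) + np.random.normal(0, 0.2, 100)
pressure[50] += 15.0
print(np.where(flag_spikes(pressure))[0])   # typically prints [50]
```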
Another common problem is a sensor gradually losing calibration, so that all its readings start to drift.
By representing the data as a multidimensional histogram, you can get an understanding of what is happening. For example, if the histogram “data-cloud” gradually moves over time, that may indicate sensor drift; but a change in the shape of the blob can indicate something different happening in the reservoir.
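A minimal sketch of that idea, assuming a two-channel (pressure, temperature) stream: treat each time window as a point cloud, and compare the cloud's centre (a shift may mean drift) with its covariance (a shape change may mean something in the reservoir). The channel names and measures are assumptions for illustration, not IO-hub's implementation.

```python
# Sketch: compare the centre and shape of a (pressure, temperature) "data-cloud"
# across two time windows. Names and thresholds are hypothetical.
import numpy as np

def cloud_summary(pressure, temperature):
    pts = np.column_stack([pressure, temperature])
    return pts.mean(axis=0), np.cov(pts, rowvar=False)

def compare_windows(win_a, win_b):
    centre_a, shape_a = cloud_summary(*win_a)
    centre_b, shape_b = cloud_summary(*win_b)
    centre_shift = np.linalg.norm(centre_b - centre_a)           # whole cloud moving: candidate sensor drift
    shape_change = np.linalg.norm(shape_b - shape_a, ord="fro")  # cloud changing shape: candidate reservoir change
    return centre_shift, shape_change

rng = np.random.default_rng(0)
early = (rng.normal(200, 1, 500), rng.normal(80, 0.5, 500))
late  = (rng.normal(203, 1, 500), rng.normal(80, 0.5, 500))   # centre moved, shape similar
print(compare_windows(early, late))
```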
You can tell whether ‘subtle changes’ in the data indicate something important (or something big about to happen), or whether those changes are just a continuation of the noise.
It gets more interesting when you analyse the level and specific mix of complexity in the data. An increase in data complexity isn’t necessarily obvious from looking at the raw data.
For example, the data complexity could change significantly in areas where the raw data amplitudes are low and apparently insignificant.
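IO-hub has not published its complexity measure, but a simple stand-in, Shannon entropy over a sliding window of binned readings, shows the point: two segments can have the same small amplitude yet very different complexity. The signals below are made up for illustration.

```python
# Sketch of one generic complexity measure: Shannon entropy of binned readings,
# computed window by window. Not IO-hub's actual measure.
import numpy as np

def rolling_entropy(signal, window=200, bins=16):
    signal = np.asarray(signal, dtype=float)
    out = []
    for start in range(0, signal.size - window + 1, window):
        chunk = signal[start:start + window]
        counts, _ = np.histogram(chunk, bins=bins)
        p = counts[counts > 0] / counts.sum()
        out.append(-(p * np.log2(p)).sum())   # entropy in bits
    return np.array(out)

# Two low-amplitude segments with the same peak-to-peak range:
# a simple two-level square wave (low entropy), then small uniform noise (high entropy).
quiet = np.tile([-0.01, 0.01], 500)                    # ~1 bit per window
busy = np.random.uniform(-0.01, 0.01, 1000)            # ~4 bits per window
print(rolling_entropy(np.concatenate([quiet, busy])))
```

In the raw trace both halves look equally small and uneventful; the entropy jump is only visible once the mix of values, rather than their amplitude, is measured.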
The algorithm does not follow any specific rules or need pre-defined models; it simply aims to provide useful information about what is happening in the data, by measuring the data's complexity and how that is changing. IO-hub is now looking at aggregating many real-time data streams to identify correlations between signals.
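One simple way to picture that aggregation step, again as an assumption rather than a description of IO-hub's method, is to compute rolling pairwise correlations across streams sampled on a common clock and report the pairs that move together. The stream names and threshold below are invented for the example.

```python
# Sketch: find pairs of sensor streams whose recent windows correlate strongly.
# Stream names and the 0.8 threshold are hypothetical.
import numpy as np
from itertools import combinations

def correlated_pairs(streams, window=500, min_corr=0.8):
    """streams: dict of name -> 1-D array, all on the same time base.
    Returns (name_a, name_b, correlation) for the latest window."""
    recent = {name: np.asarray(vals[-window:], dtype=float) for name, vals in streams.items()}
    pairs = []
    for a, b in combinations(recent, 2):
        corr = np.corrcoef(recent[a], recent[b])[0, 1]
        if abs(corr) >= min_corr:
            pairs.append((a, b, round(float(corr), 3)))
    return pairs

rng = np.random.default_rng(1)
base = rng.normal(0, 1, 2000).cumsum()          # shared underlying behaviour
streams = {
    "wellhead_pressure": base + rng.normal(0, 0.5, 2000),
    "esp_current": 0.8 * base + rng.normal(0, 0.5, 2000),
    "ambient_temp": rng.normal(25, 0.5, 2000),  # unrelated stream
}
print(correlated_pairs(streams))   # typically reports the pressure / ESP current pair
```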