
A structured approach to big data

Thursday, March 13, 2014

Big data, defined as data which is too large to work with manually, needs a structure. Andy Coward of P2 Energy Solutions explains how to build a big data management system for oil and gas.

'Data to desktop' was once an aspiration. But the problem is that the data available at a desktop has escalated into the 'big data' category.

This has shifted the problem from a lack of visibility to one of information being buried in mountains of data.

This problem will only grow as automation and standards for data access become common.

The installation of field instruments using Foundation Fieldbus, Profibus or HART protocols and the use of wireless instruments means that more data is available, and staff are all expected to utilise this data to show benefits in their work processes.

How can this be achieved across disciplines, geographic locations and company hierarchies?

Contextualise the data

One of the main problems with increasing data volumes is understanding the context of a piece of data.

The source of a piece of data may be an instrument given a moniker such as 24_07_VE_750.PV and this instrument may provide an analogue signal between 0 and 5.

How does this help someone to understand what this signal is, what it means in the context of the piece of equipment, and then in relation to 'normal' operation of this equipment?

If this piece of data is associated with a physical asset (Well 7's electrical submersible pump, for example) and the signal is then shown to be the vibration of the motor, it immediately has more value, because the data has been contextualised.

Building on this example, linking the various data sources to the assets requires a dictionary as a fundamental basis. This dictionary will describe the source of the data or, put another way, contain the ontology of the data, and will then help make the data available to be placed in the context of a piece of equipment or a physical asset.
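As a minimal sketch of such a dictionary (the tag entry, ranges and units below are illustrative, not taken from a real system), a lookup can carry the asset, the measurement type and the engineering range, so that a raw analogue reading arrives already contextualised:

```python
# Minimal illustrative tag dictionary: a raw instrument tag mapped to
# its asset, measurement and engineering range (all values hypothetical).
TAG_DICTIONARY = {
    "24_07_VE_750.PV": {
        "asset": "Well 7 Electrical Submersible Pump",
        "measurement": "motor vibration",
        "unit": "mm/s",
        "raw_range": (0.0, 5.0),   # raw analogue signal range
        "eng_range": (0.0, 25.0),  # engineering-unit range (assumed)
    },
}

def contextualise(tag: str, raw_value: float) -> dict:
    """Return a reading enriched with asset context and scaled units."""
    entry = TAG_DICTIONARY[tag]
    raw_lo, raw_hi = entry["raw_range"]
    eng_lo, eng_hi = entry["eng_range"]
    # Linear scaling from the raw signal range to engineering units.
    scaled = eng_lo + (raw_value - raw_lo) / (raw_hi - raw_lo) * (eng_hi - eng_lo)
    return {
        "asset": entry["asset"],
        "measurement": entry["measurement"],
        "value": scaled,
        "unit": entry["unit"],
    }

# A raw signal of 2.5 on the 0-5 scale becomes 12.5 mm/s of motor
# vibration on Well 7's electrical submersible pump.
reading = contextualise("24_07_VE_750.PV", 2.5)
```

A production dictionary would also record the data source and quality information, but the principle is the same: the raw number only becomes meaningful once it is resolved against the asset it describes.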

Associating data with an asset requires that all of the process equipment is captured in an asset and reference model, so that assets are arranged in a hierarchy from the lowest level (a well, for example) rolling up to a gathering centre, then to a production centre and finally an entire field.

This allows the enterprise to be viewed at a high level but investigated right down to individual pieces of equipment.
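A sketch of that roll-up (all asset names invented for illustration) is a simple tree, where each asset knows its children and the enterprise can be walked from field level down to an individual well:

```python
# Illustrative asset hierarchy: field -> production centre ->
# gathering centre -> well (all names hypothetical).
class Asset:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def walk(self, depth=0):
        """Yield (depth, name) pairs, highest level first."""
        yield depth, self.name
        for child in self.children:
            yield from child.walk(depth + 1)

field = Asset("Field A", [
    Asset("Production Centre 1", [
        Asset("Gathering Centre 1", [
            Asset("Well 7"),
            Asset("Well 8"),
        ]),
    ]),
])

# Print the hierarchy, indented by level.
for depth, name in field.walk():
    print("  " * depth + name)
```

Once data is attached to the nodes of such a tree, a roll-up from well to field is just a traversal.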

There is now a structure where data can be associated and contextualised in relation to assets and pieces of equipment.

The next question is then - what data should be associated with a piece of equipment? Should this be a maintenance procedure, the costs and history of spares for the equipment, operating pressure, temperature and flows, recent maintenance history or design documents?

All of these are sources of data for the asset and the simple answer is that all of them should be associated and visible to the user, depending upon the user's requirements.

Federating the data

The challenge is extracting data from all of the different sources and then displaying it in the user-selected format, without duplicating the data.

The aim here is to extract value from existing data warehouses and repositories, not extract data and then create a copy.

We can start to extract information from the 'big data' by performing calculations upon the data and then displaying these results to the organisation. These calculations could be the efficiency of a compressor or the performance of a pump against design conditions.

Through simple calculations, combining the data from different sources and providing more detailed information and diagnostics on the system, we start to extract the value from the data sources.
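As an illustration of how simple such a calculation can be, the sketch below (all figures invented) combines a live differential-pressure reading from the historian with a design head figure from an engineering database to report a pump's performance against design:

```python
# Combine data from two sources -- a measured suction and discharge
# pressure, and a design figure from an engineering database -- into a
# single diagnostic. All figures are illustrative, not real equipment data.

DESIGN_HEAD_M = 120.0  # design head from the engineering database, metres

def head_from_pressures(suction_kpa: float, discharge_kpa: float,
                        density_kg_m3: float = 1000.0) -> float:
    """Convert a measured differential pressure to pump head in metres."""
    g = 9.81  # gravitational acceleration, m/s^2
    return (discharge_kpa - suction_kpa) * 1000.0 / (density_kg_m3 * g)

def percent_of_design(head_m: float) -> float:
    """Measured head as a percentage of the design figure."""
    return 100.0 * head_m / DESIGN_HEAD_M

head = head_from_pressures(suction_kpa=100.0, discharge_kpa=1080.0)
pct = percent_of_design(head)
```

The value added is not in the arithmetic but in the combination: neither the historian nor the engineering database alone can say whether the pump is underperforming.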

Triggering actions

When a calculated threshold is crossed or specific conditions in the data are met, how is action triggered?

The process needs to start a workflow, inform people of situations or even simply send a text message and enable the process to be tracked. This simple system is the first step in turning data into information and then information into action.

For example, if a compressor is operating outside of its design conditions then this will have the effect of shortening its run length. Informing people in maintenance and operations that this event has happened enables people from different disciplines to take action to determine whether the shortened run length is warranted by increased production, or whether operating procedures or training need to be updated.
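The trigger itself can be a small rule. In the sketch below, the threshold, tag name and notify() hook are all hypothetical stand-ins for a real workflow and messaging system:

```python
# A minimal condition-to-action sketch: when a calculated value breaches
# its limit, record a tracked action and notify the relevant disciplines.
# The threshold, tag name and notify() hook are all hypothetical.

actions_log = []  # in a real system this would be a tracked workflow

def notify(recipients, message):
    """Stand-in for e-mail / text-message delivery with tracking."""
    actions_log.append({"to": recipients, "message": message, "status": "open"})

def check_compressor(tag: str, pct_of_design: float, limit: float = 110.0):
    """Trigger a cross-discipline action if operation exceeds design."""
    if pct_of_design > limit:
        notify(
            ["maintenance", "operations"],
            f"{tag} operating at {pct_of_design:.0f}% of design "
            f"(limit {limit:.0f}%); review run length vs production.",
        )

# A compressor running at 118% of design conditions raises one tracked
# action for maintenance and operations.
check_compressor("Compressor K-101", 118.0)
```

Because each triggered action is recorded rather than fired and forgotten, the process can be tracked through to resolution, which is the step that turns information into action.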


Displaying the data

How will data be displayed? There are many choices here: mimics from the control system, tree maps, dial displays, bar charts, the colours that will be used (muted greys to highlight abnormal states, or the full range of colours available) and how navigation will be handled.

Developing and defining a display standard, a display hierarchy, data visualisation standards and ensuring that this is fit for purpose from boardroom to control room may seem trivial on paper, but is hugely important.

Domain knowledge

The final step in this route to informed employees is the provision of domain knowledge. The data should be available to applications or components which provide insight into the specifics of the process or operation that the company is involved in.

For example, if production from a platform is available to the operations team, it can then be verified and approved. This verified production volume should then be available to the packages and workflows that handle reserves management and reservoir simulation, and should be associated with information on well testing so that allocations can be made.
