
Cognite - a business putting data in context

Friday, October 30, 2020

Cognite, based in Oslo, has built a business putting oil and gas data in context - working closely with Aker BP - starting with production, now moving to subsurface and drilling data.

Cognite works closely with oil and gas operator Aker BP. The two companies have a common shareholder, Norwegian engineering giant Aker, which owns 62 per cent of Cognite and 49 per cent of Aker BP.

Aker BP makes good use of Cognite tools for its own purposes, and the two companies have a common vision about how they want to make better use of digital technology.

The core philosophies could be described as 'liberating' and 'contextualising' data. Cognite defines liberating data as taking data out of the proprietary data formats and software systems which restrict its wider use, and putting it into standard formats and structures.

'Contextualising data' means connecting data together in a way which makes sense for others.

The term 'contextualising' probably needs more explaining. Perhaps this analogy is helpful. Chips, tartar sauce and deep fried fish, by themselves, don't mean very much. But if you put them together you get a classic British dish which means a lot to British people.

Similarly, there's a limit to what you can do with seismic data and well log data by themselves, but if you connect them together in the right way, a geophysicist can get great insights into what is happening in the subsurface.

Cognite has built a business offering around the process of liberating and contextualising customer data, finding out what domain experts need and giving it to them, and achieving concrete results for customers, such as a reduction in specific costs or time.

Cognite's approach to liberating and contextualising data can be seen as a stack of horizontal layers: the raw data sources; a data integration layer; a data contextualisation layer; a data 'enablement' layer, handling processes such as data engineering, machine learning and low-code development; and, on top, a layer exposing data to every application via APIs (application programming interfaces, the means by which different software systems are connected).
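To make the layered flow concrete, here is a minimal sketch in Python. The function names and record structure are illustrative assumptions, not Cognite's actual code: each function stands in for one layer, from integration through contextualisation to the API on top.

```python
# Illustrative sketch of the layered flow described above --
# function names and record shapes are hypothetical, not Cognite's code.

def integrate(raw_sources):
    """Data integration layer: pull records from many sources into one list."""
    return [record for source in raw_sources for record in source]

def contextualise(records):
    """Contextualisation layer: group records by the asset they describe."""
    by_asset = {}
    for record in records:
        by_asset.setdefault(record["asset"], []).append(record)
    return by_asset

def serve(contextualised, asset):
    """Top layer: expose contextualised data to an application, as an API would."""
    return contextualised.get(asset, [])

# Two raw sources: a live sensor feed and a document archive.
sensor_feed = [{"asset": "pump-01", "value": 3.2}, {"asset": "pump-02", "value": 1.1}]
archive = [{"asset": "pump-01", "doc": "maintenance report"}]

data = contextualise(integrate([sensor_feed, archive]))
print(serve(data, "pump-01"))  # both pump-01 records, drawn from two sources
```

The point of the sketch is that an application asking about 'pump-01' gets the sensor reading and the archive document together, without knowing which source either came from.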

The business offering is packaged as 'software as a service' - software which is made available by subscription via cloud hosting.

The company has grown from 34 employees in Oslo in 2017 to over 300 employees now, based in Oslo, Stavanger, Austin, Houston, Palo Alto, Tokyo, Vienna, Milan, Helsinki and Dhahran.

More on contextualisation

Just like in real life, 'context' for data is not a single thing, it depends on the beholder or the person using the data. Different domain experts might want to see the same data placed in a different context, based on the job they do.

For example many different departments of oil and gas companies want to know production rates, but they use the data for different purposes - predicting cashflows, reporting finances, or establishing if there is a problem with the well. Each of these departments 'contextualises' the production data in a different way.

For contextualising large amounts of varied oil and gas data, we need a multi-layered process, where 'raw' data (which could be anything from live sensor data to documents from the corporate archive), after being 'liberated', is placed into simple intermediate data models, and then built up into more complex models which give specific domain experts what they need, contextualised in different ways.

The intermediate data models can combine data from multiple sources in different ways, such as putting data together based on geophysical relationships, or connecting data based on a knowledge graph or entity map of how different pieces of information relate.

The core building block of the contextualisation process could be described as 'simple data models'. If the data model is simple enough, you should be able to re-use it in multiple places.

A data model could be re-used across different industries - for example a data model for a turbine developed for oil and gas could be used in power generation.

If it is possible to re-use data models or components of them, it can make the process of building a data contextualisation 'system' for a certain industrial process much faster.
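A 'simple data model' of the kind described above, such as the turbine example, might be sketched as a plain data structure. This is a hypothetical illustration (the field names are assumptions): because it carries no industry-specific assumptions, the same model could describe a turbine on an oil platform or in a power station.

```python
# A hypothetical 'simple data model' for a turbine, as a Python dataclass.
# Nothing in it is specific to oil and gas, so it can be re-used across industries.
from dataclasses import dataclass, field

@dataclass
class Turbine:
    name: str
    rpm_series: list = field(default_factory=list)   # rotation speed readings
    temp_series: list = field(default_factory=list)  # temperature readings
    documents: list = field(default_factory=list)    # linked archive documents

offshore = Turbine("GT-A", rpm_series=[3000, 3010], documents=["inspection.pdf"])
power_plant = Turbine("Unit-4", rpm_series=[3600])   # same model, different industry
```

More complex, domain-specific models can then be built up by combining simple blocks like this one.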

Some data already arrives contextualised to a certain degree, for example an old reservoir model is itself a large amount of contextualised data. The sensor data from a piece of equipment, such as a gas turbine, may arrive with multiple streams integrated together, which is a form of contextualisation. There is no need to strip away all of the context before you start putting it back again.

Systems for handling live or 'operational' data, such as sensor streams, are different to systems for handling other types of data, such as documents, models and data archives, which Cognite calls 'planning' data.

But the same philosophy of data handling can work for both - you need to find ways of getting data from where it is stored or generated, to a format where people can use it to make better decisions.

In this definition, in upstream oil and gas, production data is largely operational data, subsurface data is more planning data, and drilling data is a mixture of both.

Expose to other systems

Cognite does not need to make all of the actual 'apps' which people use to do their work. It can also provide a useful service 'exposing' contextualised data to other apps, using standard APIs.

It works with some specialist analytics companies, which use data science to try to find interesting insights from the data.

There are also business opportunities for small companies who want to build tools to work with the data to provide useful services.

Cognite also encourages its customers to develop their own applications using the data.

Not many people in a company want to work with raw data itself, but they are happy to use tools which make that data easy to work with.

Subsurface data

Subsurface data can be contextualised in different ways. The obvious way to do it is by grid co-ordinates, so everything we know about the subsurface is mapped to a certain grid position.

But this method does not work for all subsurface data, because not all subsurface data has precise geographical co-ordinates - such as electromagnetic survey data, or gravity data.

Digital Energy Journal spoke to Dr Carlo Caso, senior director of product management - subsurface and drilling, with Cognite, who has a role overseeing the strategy and 'roadmaps' of the Cognite Data Fusion core components and applications, covering exploration, field development and drilling.

He has a PhD in geology, and then worked for Schlumberger for 7 years in subsurface and drilling software technology, including working with Petrel, probably the most used oil and gas geoscience software.

'There are many different ways to contextualise the data,' he says. The challenge for Cognite is to 'provide the standard tools to relate the data.'

One particular challenge, he says, is finding ways to work with software models which were created using old software versions. You need to either find the legacy software which was used to create the files, or find a way to work with the old code directly.

There have been a number of efforts to build 'port to port' connections between different subsurface applications, connecting one system with another. 'This is extremely painful, I don't recall any successful project,' he says. Different software packages follow different logic in how they handle data, and it can be very hard to work out what that logic is.

Instead, following the philosophy outlined above, you can try to abstract the useful elements out of the data, and put them in context, to create a 'data fusion layer'. Then this data fusion layer can be used as a basis for re-using the data somewhere else much more easily.

Cognite is part of the 'Open Subsurface Data Universe (OSDU)' project, set up by a number of oil majors to define a standard framework for subsurface data.

Dr Caso sees OSDU as complementary to what Cognite is doing. 'We are aligned with the philosophy and objective,' he says. The APIs which Cognite makes, to allow other systems to integrate, can work together with OSDU's framework.

Similarly, Dr Caso sees the Energistics RESQML standard, a standard way to store reservoir model data, as very helpful to what Cognite is aiming to do - because it puts data in a format which makes it easy for other systems to work with it.

Top down approach

The approach of building data liberation / contextualisation 'systems' could be described as one which is developed both 'bottom up' and 'top down.' Both need to happen at the same time.

The 'top down' approach involves talking to company domain experts about how they work, what data they use for their work (whether analysis or decision making), and then building models which put together data in this way, perhaps automating some of their work.

For example, you can talk to an expert in rotating machinery to find out what data they find helpful from the machinery in telling them if any faults are emerging, and then see if you can build systems to put that data together for them.

Ideally you can have a feedback loop in place, where you are continually improving the data collection and the algorithm, to give more to the domain expert.

'It is not easy or done in two days, but at least it's scalable,' says Dr Caso. Once it is done, it can be used many times over.

In some cases, the 'top down' ways of working with the data would be defined by the client themselves - for example, if a company wanted to build a warning system which would tell the crew on a drilling rig that they need to evacuate.

The drilling company would need to very carefully define what data 'picture' would lead the computer system to issue this instruction and how exactly it would work, because it would be very expensive if the warning system gave bad advice - telling people they did not need to evacuate when something dangerous was happening, or telling people to evacuate when the situation was safe.

To take a lesson from the Deepwater Horizon disaster, there was a complex data picture relating to drilling mud loss which the crew misinterpreted. It may be possible to develop a digital system which could put all the relevant data in context and then give the right advice.

Many of the difficulties encountered relate to data quality, or data 'wrangling' - putting data into the format you need. 'If we go to the root of this we can build a system that can handle it,' Dr Caso says.

If you just contextualise data bottom up, you may end up with data put together in ways which are no use to domain experts. But if you only do it top down, you may find yourself trying to build tools from data which is not available.

Bottom up - data modelling and prototyping

The 'bottom up' approach can be described more technically as 'data modelling and prototyping' - taking available data and finding ways to contextualise it so that it is useful for the way people want to work with it.

Cognite's process involves gradually exploring what can be built, seeing what data could be ingested into a system, making models, and finally connecting it to live data and deploying it.

There is machine-learning powered inference that suggests how data should fit together, which can then be validated by a human being (typically Cognite can match from 70 to 99% of data automatically based on pre-trained algorithms, the company says).
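Cognite's pre-trained matching algorithms are not public, so as a stand-in, the human-in-the-loop pattern can be sketched with simple string similarity: matches above a confidence threshold are accepted automatically, the rest are queued for a person to validate. The threshold and the tag names are illustrative assumptions.

```python
# Sketch of suggested matching with human validation. String similarity
# (difflib) stands in for the pre-trained algorithms; threshold is illustrative.
from difflib import SequenceMatcher

def match(sensor_tags, asset_names, threshold=0.8):
    auto, review = {}, []
    for tag in sensor_tags:
        # Find the most similar asset name for this sensor tag.
        best = max(asset_names,
                   key=lambda a: SequenceMatcher(None, tag.lower(), a.lower()).ratio())
        score = SequenceMatcher(None, tag.lower(), best.lower()).ratio()
        if score >= threshold:
            auto[tag] = best       # confident enough to match automatically
        else:
            review.append(tag)     # queued for a human to validate
    return auto, review

auto, review = match(["PUMP-01-PRESSURE", "XZY_331"],
                     ["pump-01-pressure", "compressor-02"])
```

Here the first tag matches an asset exactly (apart from case) and is accepted, while the cryptic second tag falls below the threshold and goes to a human, mirroring the 70 to 99 per cent automatic match rate the company describes.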

Cognite has developed a number of small tools or 'microservices' to improve the data.

For example, there are tools to align time series with different sampling rates. One data stream may give a reading once per hour, another every minute. The gaps can be filled by interpolation.
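Gap filling of this kind can be sketched in a few lines. This is a minimal linear interpolation, not Cognite's actual microservice; the timestamps (in seconds) and values are made up.

```python
# Minimal sketch of filling gaps between sample rates by linear interpolation.
def interpolate(series, t):
    """series: sorted list of (timestamp, value) pairs; returns value at time t."""
    for (t0, v0), (t1, v1) in zip(series, series[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside series range")

hourly = [(0, 10.0), (3600, 16.0)]        # one reading per hour, timestamps in seconds
value_at_half_hour = interpolate(hourly, 1800)
print(value_at_half_hour)  # 13.0, halfway between the two hourly readings
```

An hourly stream resampled this way can then be compared point-by-point with a per-minute stream on a common time axis.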
