
Petronas - strategies for improving data

Wednesday, May 17, 2017

How can oil companies reduce the amount of time their technical staff spend checking data? PETRONAS is developing strategies to tackle the problem.

We all know the problem - that oil and gas technical people are spending about half of their time doing 'data collection' - trying to find, check and integrate the data they need to do their analysis.

Malaysian oil company PETRONAS is developing strategies for trying to fix this.

The reason data collection takes so much time is that the data must be brought together from many different sources, and everyone who works with it spends time re-checking it, because its quality is usually unknown.

Trusted data

Philip Lesslar, principal consultant, technical assurance, Group Technical Data, PETRONAS, talked about a PETRONAS project aimed at delivering 'trusted data' and elaborated on the steps required to achieve this.

He was speaking at the Digital Energy Journal conference in KL on October 5, 'Connecting Subsurface, Drilling Expertise with Digital Technology'.

The project team looked at 10 workflows in detail, and looked at four key data types - well headers, check shots (bore hole seismic surveys), deviation and basic logs.

For these data types alone, the company calculated non-productive time of 58 man years a year on data searching, collation and checking, Mr Lesslar said.

To avoid this work being needed, employees must be able to access a single answer for each data type, or a 'single source of truth'. 'That's easy to say but quite hard to get,' he said.

For example, a well header has 26 mandatory data types in it, providing basic data about the well, such as its total depth.

The usual way to fix it is to set up a data quality checking and improvement project. Typically, when you start, there is a lot of enthusiasm, but it starts to wane after a number of months. 'After 1 year [enthusiasm] has shifted to another direction,' he said. 'That's just human.'

PETRONAS' new approach to achieving trusted data starts with looking at data types as basic building blocks. For each data type, there are five steps: i) identification of an official corporate database for the data type; ii) identification of the data custodian within the technical data department; iii) official nomination of a subject matter expert to define the business rules around the data type; iv) development of a QC workflow; and v) capture of all documents in an audit trail.
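The five steps above amount to a governance record kept for each data type. As a minimal sketch (not PETRONAS' actual system - all names and field choices here are hypothetical), such a registration could be captured in a simple data structure:

```python
from dataclasses import dataclass, field

# Hypothetical record of the five governance steps for one data type.
# Field names are illustrative, not PETRONAS' actual schema.
@dataclass
class DataTypeRegistration:
    data_type: str                # e.g. 'well header'
    official_database: str        # i) official corporate database
    data_custodian: str           # ii) custodian in the technical data dept
    subject_matter_expert: str    # iii) SME who defines the business rules
    qc_workflow: str              # iv) reference to the QC workflow
    audit_documents: list = field(default_factory=list)  # v) audit trail

# Example registration for the 'well header' data type
reg = DataTypeRegistration(
    data_type="well header",
    official_database="CorporateWellDB",
    data_custodian="Group Technical Data",
    subject_matter_expert="J. Doe",
    qc_workflow="well-header-qc-v1",
)
reg.audit_documents.append("business-rules-well-header.pdf")
```

With 91 data types in scope, a collection of records like this would also give the project a natural inventory to report progress against.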

'It brings all interesting challenges,' he said.

He also explained a classification of data being used that divides data into primary data and secondary data.

Primary data can include the raw recorded seismic and well logs, and the reference data and master data which go with them.

The 'secondary data' can include all of the data interpretations, such as processed seismic and horizons, and then all the data collections, such as composite well logs, and project archives.

'If we go into a room and just talk data without specifying it - people get into a twist,' he said. 'I like to break it down.'

'We look at it as building blocks, each data type is a building block,' he said. 'We have 91 data types that we're zooming in on.'

Knowing the quality

Knowing the quality of data is not the same as having all data of perfect quality, he said. But if you know the quality of the data you are working with, you don't need to spend more time quality checking it, so that is a useful goal.

As a step in this direction, PETRONAS is building a software application which can automatically check that the data is complete. For example, it can check that every well header includes data for total depth. It can give the data an easy traffic-light indication of how complete it seems to be. It can also show if the situation is improving.

Other automatic checks can also be made, for example whether the data is consistent and whether there are duplicates.
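A minimal sketch of what such automated checks might look like, assuming one record per well as a simple dictionary; the field names, mandatory subset and traffic-light thresholds are illustrative, not PETRONAS' actual rules:

```python
# Illustrative subset of mandatory well-header fields (a real well
# header has 26 mandatory data types, per the article).
MANDATORY_FIELDS = ["well_name", "total_depth", "spud_date"]

def completeness(record):
    """Fraction of mandatory fields that are actually populated."""
    present = sum(1 for f in MANDATORY_FIELDS if record.get(f) not in (None, ""))
    return present / len(MANDATORY_FIELDS)

def traffic_light(score):
    """Map a completeness score to an easy traffic-light indication.
    Thresholds are assumptions for the sketch."""
    if score >= 0.95:
        return "green"
    if score >= 0.70:
        return "amber"
    return "red"

def find_duplicates(records, key="well_name"):
    """Flag records whose key value has already been seen."""
    seen, dups = set(), []
    for r in records:
        k = r.get(key)
        if k in seen:
            dups.append(k)
        seen.add(k)
    return dups

# Example: a well header missing its spud date
record = {"well_name": "EXAMPLE-1", "total_depth": 3200, "spud_date": None}
score = completeness(record)   # 2 of 3 mandatory fields present
light = traffic_light(score)   # 'red' at ~67% completeness
```

Running rule sets like these across all corporate databases, and storing the scores over time, is what lets the application show whether the situation is improving.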

Altogether, about 70 to 90 per cent of data quality issues can be checked automatically, Mr Lesslar said.

The automated data quality management program runs 1000 queries against 28 databases every day. There is a question of how many checks really ought to be done - but it might be worth doing millions of automated checks continuously, he said.

The other 10 to 30 per cent of checking can be done by a human subject matter expert - for example, verifying whether any missing data is important, giving the data a quality 'indicator', and checking the reference values.

'At the end of this, we want fit for purpose data with a known quality we can use straight out of the box,' he said.

PETRONAS plans to make sure all of its corporate databases are addressed in this way.

There also needs to be work to prioritise which wells to start off with. The PETRONAS team puts together a 'master priority well list', of the most important wells.

This priority list is based on an understanding of what projects company staff are working on, or have coming up. 'From that we extract the master list that we focus on,' he said.

Subject matter experts also need to be involved in explaining which data types are most important and which should be priorities. For example, you might know that, if you are trying to do a certain workflow, it will 'just fall over' if a certain piece of data is absent, he said.

At the end, PETRONAS should have a 'Quality Data Inventory' - a set of data which has been quality checked to confirm that it is fit for purpose, and is ready to be used for business activities.

There is also a software tool which will tell you how far the project has got with checking data. 'That's important, that kind of feedback,' he said.

All of this information is shown in PETRONAS's 'QDI Dashboard', where a progress bar for each data type shows how much of it can be described as 'quality checked and fit for purpose'. The dashboard brings together data from both the automated and human checks, and a single score shows the overall percentage of 'trusted' data.
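The per-type fill levels and the single overall score could, in principle, be computed along these lines - a sketch under assumed inputs (counts of checked vs. total records per data type), not the actual QDI implementation:

```python
def dashboard_scores(inventory):
    """Compute QDI-style scores from an assumed inventory structure:
    a dict mapping data type -> (quality_checked_count, total_count)."""
    # Fill level per data type, for the progress bars
    per_type = {t: checked / total for t, (checked, total) in inventory.items()}
    # Single overall score: share of all records that are 'trusted'
    total_checked = sum(c for c, _ in inventory.values())
    total_records = sum(n for _, n in inventory.values())
    overall = total_checked / total_records
    return per_type, overall

# Example with two of the four key data types from the article
per_type, overall = dashboard_scores({
    "well headers": (80, 100),
    "check shots": (20, 100),
})
```

As more of the 91 data types are added to the inventory and more records pass the automated and human checks, both the per-type bars and the overall score should rise.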

Over time, more data types can be included, and the scores should improve.

The systems need to be credible, i.e. people need to believe that when the system says that data is fit for purpose, it actually is.

And while data quality management programs can go a long way to improving data quality, they don't by themselves ensure that the data is trusted, he said. 'Those who work in data management for a long time know the struggle that we go through.'

A further challenge is making sure the data can easily be pulled into the various software applications and integrated, but making quality data available is a useful step. 'Let's make sure the basic data is right before we get to the more complex stuff,' he said.

A further issue is tracking the benefits of the project. A lot of effort goes into achieving trusted data, and people want to see that they are getting something back from the effort, he said.
