
Data professionals working alongside engineers

Friday, June 30, 2017

Instead of having a field development team do all of the data preparation work itself, the team can work together with data management specialists and get the job done much faster. Chew Wei Liang from PETRONAS explained how it is done.

As an experiment, PETRONAS had a specialist data management team work together with the field development team when preparing the data for a new field development project.

They found that with this 'optimised data' approach, it was possible to reduce the time taken by about 33 per cent, said Chew Wei Liang, executive in technical assurance, Group Technical Data, with PETRONAS.
Mr Chew presented the project at Digital Energy Journal's Kuala Lumpur conference in October 2016.

The task being addressed was getting all of the data ready for a field development project to start.

This task is usually undertaken by the field development team itself, and takes about 226 man days to complete, spread over about 3 months (so approximately 4 members of staff working on it).

The work involves collecting, quality checking and assuring the data, with the most important data types signed off by the relevant subject matter experts at the end.

The experiment looked at both a greenfield and brownfield development project.

The data management staff started with a 'data type' analysis, to identify the most important data types and workflows used by each discipline, and to analyse and prioritise them. They wanted to understand what data goes into each workflow and what data comes out.
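As a rough sketch of what such an analysis might capture, the structure below maps workflows to their input and output data types and ranks the data types by weighted usage. The workflow names, data types and priority weights here are illustrative assumptions, not PETRONAS's actual catalogue.

# Hypothetical sketch of a 'data type' analysis: map each discipline's
# workflows to the data they consume and produce, then rank the data
# types by how many high-priority workflows depend on them.
from collections import Counter

workflows = [
    {"discipline": "geophysics", "name": "well-to-seismic tie",
     "priority": 1, "inputs": ["well headers", "check shots", "sonic logs"],
     "outputs": ["synthetic curves", "time-depth relationships"]},
    {"discipline": "petrophysics", "name": "log evaluation",
     "priority": 2, "inputs": ["wireline logs", "core data"],
     "outputs": ["porosity curves", "saturation curves"]},
]

# Weight each input by the priority of the workflow that needs it,
# so data feeding priority-1 workflows gets prepared first.
usage = Counter()
for wf in workflows:
    for data_type in wf["inputs"]:
        usage[data_type] += 1.0 / wf["priority"]

for data_type, score in usage.most_common():
    print(f"{data_type}: {score:.2f}")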

The project covered workflows in the geophysics, geology, petrophysics and production technology departments.

There was a 'data availability' analysis to collate the data which was already available, and to identify data types which did not have a corporate data bank, for example data taken from reports or spreadsheets.
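A minimal sketch of how that availability check might be recorded, assuming a simple classification of each data type by where it currently lives (the categories and data types are invented for illustration):

# Hypothetical 'data availability' analysis: flag data types that have
# no corporate data bank and so must be recovered from reports or
# spreadsheets and given a managed home.
sources = {
    "well headers": "corporate database",
    "check shots": "corporate database",
    "pressure tests": "spreadsheet",      # no corporate data bank
    "drilling reports": "report (PDF)",   # no corporate data bank
}

no_databank = [d for d, s in sources.items() if s != "corporate database"]
print("No corporate data bank:", no_databank)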

The 'data QC' step involved having the relevant subject matter expert quality check the collated data for both projects.

The 'data accessibility' step was to make sure the data could be delivered to the users effectively, and that users would know its quality.

The final 'data assurance' step was to ensure that the quality control process had been adhered to, and there was an audit trail.
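A minimal sketch of what the assurance step could look like in practice, assuming each sign-off by a subject matter expert is appended to a simple audit trail (the record fields and names are assumptions, not PETRONAS's actual system):

# Hypothetical data assurance record: each QC sign-off by a subject
# matter expert is appended to an audit trail, so adherence to the
# quality control process can be demonstrated later.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class SignOff:
    data_type: str
    expert: str
    passed: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_trail: list = []

def sign_off(data_type: str, expert: str, passed: bool) -> None:
    """Record an SME's QC decision; the trail is only ever appended to."""
    audit_trail.append(SignOff(data_type, expert, passed))

sign_off("check shots", "geophysics SME", passed=True)
print(audit_trail[-1])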

Once all the methodologies were put together, the project team tried to quantify the time savings achieved.

With the 'optimised data' approach, the field development team spent 80 days preparing data, the data team spent 54.5 days finding and quality controlling data, and the project team contributed a further 17 days of input. So altogether it took 151.5 days of work.

This compares to the 226 man days of work previously needed when the field development team did the data preparation work itself: a saving of 74.5 man days, or about 33 per cent.
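The arithmetic behind those figures, written out as a trivial check:

# Reproducing the man-day figures reported for the pilot.
field_team, data_team, project_team = 80, 54.5, 17
optimised = field_team + data_team + project_team  # 151.5 man days
baseline = 226
saving = baseline - optimised                      # 74.5 man days
print(f"{saving} man days saved, {saving / baseline:.0%} of the baseline")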

The benefits include less bad data, the right data being readily available, higher accuracy, more data re-usability, less repetitive data quality control work, and less data downtime, among others.

There were also non-quantitative benefits to the data flow optimisation: the final quality controlled data is unique, needing no further clarification; access to the data is better; and collaboration between the project and technical teams is improved.

Now, most of the data is available from the same source, and the same source is accessed by everyone, Mr Chew said.

The data flow optimisation will now be used on a further 7 projects. If there are savings of 75 man days on each of them, that adds up to 525 days, or about 2 man years.

The project is continuing, with efforts to develop new data quality control processes for more data types, to track more data types, and to identify data for which there is no database, Mr Chew said.

Understanding workflows

The data flow optimisation process is basically about understanding how data flows around the company - and then making sure the most important data is readily available in a format which can be worked with, Mr Chew said.

Data flows between processes and applications, between the people who work in a specific domain, and on to other domains. Consider, for example, the way that geophysicists work with seismic data.

Data flows into one 'workflow' and on to another 'workflow' (where a 'workflow' is the work done by a certain individual).

You could have a master data management database and take data from that into a derived database.
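A minimal sketch of that master-to-derived pattern, assuming the simplest possible case where a project database is an extract of a governed master (the table contents and field names are invented):

# Hypothetical master data management pattern: project databases are
# derived extracts of a single governed master, so everyone reads from
# the same source of truth.
master_wells = {
    "W-101": {"total_depth_m": 3250.0, "status": "producing"},
    "W-102": {"total_depth_m": 2980.0, "status": "suspended"},
}

def derive_project_db(master: dict, well_ids: list) -> dict:
    """Copy just the wells a project needs; edits never flow back."""
    return {wid: dict(master[wid]) for wid in well_ids}

project_db = derive_project_db(master_wells, ["W-101"])
print(project_db)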

One typical workflow is tying well data to seismic data. In this workflow, based on staff interviews by Mr Chew's team, about 50 per cent of the work is around getting the data.

Of this, 3.5 per cent goes on gathering requirements and 26.5 per cent on requesting and searching for the relevant data. The final 20 per cent is quality checking the data, including well headers and check shots.

The other 50 per cent of the work is interpreting the data, where the actual value is created.
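Those proportions do account for the full data-handling half of the workflow, as a quick check shows:

# The reported split of the well-to-seismic tie workflow, from the
# staff interviews cited above.
data_work = {
    "gathering requirements": 3.5,
    "requesting and searching": 26.5,
    "quality checking": 20.0,
}
print(sum(data_work.values()))  # 50.0: half the workflow is data handling
# The remaining 50 per cent is interpretation, where the value is created.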

The deliverables can include synthetic curves, and new time-depth relationships.


