
Can we run the E&P workflow through a browser?

Monday, June 11, 2012

Oil and gas IT could be much easier if all data and application functionality sat on a centralised database and computing platform accessed via a web browser, instead of being spread across multiple PC software tools. Dr Duncan Irving of Teradata asks whether it can be done.


If the oil and gas industry was starting its IT from scratch, it would probably do it like many other industries do, with all data and software accessed through web browsers, and all data held in one central database, said Duncan Irving, EMEA oil and gas industry consultant for data warehousing company Teradata, speaking at the Digital Energy Journal conference 'Developments with subsurface data' in Aberdeen on March 13.

No complex PC applications, no subsurface 'projects', just one large database holding all the company's subsurface, surface and sensor information, which people would work with directly.

So geoscientists and engineers would never have to move data between systems and reformat it. There would be no problem of multiple versions of data about the same field sitting in different places.

The database would be run according to standard computer science tenets of how to run a database, with long term stewardship, good data governance, and records of which people did what, at which time and with which version of the data.

'That would be a good place to be,' he said.

But the oil and gas industry, and technology itself, have a long way to go before something like this can work.

Data transfer rates have not increased as fast as other aspects of IT, such as CPU speed and hard drive capacity - so you can't move the data around fast enough.

A seismic survey acquired last year was 1.7 petabytes; and one oil major has noted that a CT scan of 1,000m of core would generate an exabyte of data, too large for any current platform to process usefully.
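To put those volumes in perspective, here is a back-of-the-envelope calculation, assuming a sustained 10 Gbit/s link (an illustrative figure, not one from the talk):

```python
# Back-of-the-envelope transfer times for the data sizes quoted above,
# assuming a sustained 10 Gbit/s link (an assumption for illustration).
SURVEY_BYTES = 1.7e15          # 1.7 petabytes of seismic data
CORE_SCAN_BYTES = 1e18         # 1 exabyte from a CT-scanned core
LINK_BITS_PER_S = 10e9         # 10 Gbit/s sustained throughput (assumed)

def transfer_days(n_bytes: float, bits_per_s: float) -> float:
    """Days needed to move n_bytes over a link of the given speed."""
    return n_bytes * 8 / bits_per_s / 86400

print(f"1.7 PB survey: {transfer_days(SURVEY_BYTES, LINK_BITS_PER_S):.1f} days")
print(f"1 EB core scan: {transfer_days(CORE_SCAN_BYTES, LINK_BITS_PER_S):.0f} days")
```

At that rate the survey alone would take over two weeks to move, and the core scan decades - which is why moving the computation to the data matters.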

You would need to get all of the oil and gas industry's existing data into a database which can cope with the volumes involved, structured so that current analytical calculations can be run on it.

But there are technologies and services available to help do this, including better web technologies, faster processors, data storage devices which can serve up the data quickly, and analytical tools. 'But the challenge is putting all of that together,' he said.

Another obstacle is organisational tension - for example between enterprise architects, who understand a lot about IT at the corporate scale but 'are not really sure what the guy in the heavy metal T-shirt is talking about,' he said.

Some people understand the technology, but just don't manage to make it work in their company.

'No-one as far as I know in oil and gas has put all these pieces together to allow you to do all the computations and get the questions answered in the timeframe you need to do it in,' he said.

'Our vision is that from your web browser you perform all of these different analytical activities. It doesn't matter where the CPU cycle is, as long as you interact with it in a seamless rapid fashion, that's all you really care about,' he said.

'As long as you're getting the job done, it's a lot more collaborative if you're using the same data.'

Integrating

The industry benefits of everybody working on a common database would be much larger than the benefits people have got from visualisation systems.

'Collaborative visualisation doesn't share the data or insight. It just shares the images, in a very expensive cave.'

Integrating the data would also yield much bigger benefits than are available from sharing workstation capacity, he said.

Putting workstations in server farms rather than on people's desks is 'not really integrating data at all, that's just helping out your local hardware vendor with their annual revenues,' he said. 'It doesn't get away from the issue of having all these proprietary file formats and data types.'

If data is all stored in lots of separate files or projects, there is a loss of knowledge every time data is transferred from one system to another, he said.

Working with your data

Oil majors vary in how much they want to work with the data they have stored. Some supermajors have a lot of their decision making architecture and modelling directly built on their data. Others just store lots of data but don't do much with it afterwards.

To get to know what value lies in data, Dr Irving suggests using Hadoop programming framework to re-purpose incoming data (streaming seismic and well sensor feeds). Hadoop is the open source Apache foundation project originally launched as MapReduce by Google in 2004.
Hadoop is great for finding patterns in the data, extracting features and resorting for another purpose.
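As a rough illustration of the map/reduce pattern, here is a minimal single-process sketch in Python - the record layout is invented, not from the talk, and real Hadoop would distribute the same phases across a cluster:

```python
from collections import defaultdict

# Toy map/reduce over well-sensor readings: (well, hour) -> max pressure.
# A single-process sketch of the pattern Hadoop distributes across
# a cluster; the record layout here is invented for illustration.
readings = [
    ("well_A", "2012-03-13T09", 211.4),
    ("well_A", "2012-03-13T09", 214.9),
    ("well_B", "2012-03-13T09", 187.2),
]

def map_phase(record):
    well, hour, pressure = record
    yield (well, hour), pressure          # emit key/value pairs

def reduce_phase(key, values):
    return key, max(values)               # extract one feature per key

shuffled = defaultdict(list)              # the 'shuffle' groups by key
for record in readings:
    for key, value in map_phase(record):
        shuffled[key].append(value)

for key, values in shuffled.items():
    print(reduce_phase(key, values))
```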

Google uses MapReduce in its search engine, running many different algorithms on data held in different data stores, to deliver a list of all the web pages containing a certain term, ordered so that the page it thinks you are most likely to want is at the top.

'Parallel systems are very good at this, they have very good indexing systems, they know how a database is structured, and they will present you with an answer very quickly,' he said.

'It allows you to get all of this and put it in some sort of order, or get all of this and put it somewhere else.'
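The data structure underneath that kind of search is an inverted index. A toy version in Python, with document texts invented for illustration:

```python
# A toy inverted index, the core structure behind the search behaviour
# described above: for each term, which documents contain it, ranked
# by how often it appears. Document texts are invented.
docs = {
    1: "porosity and permeability in turbidite sands",
    2: "core permeability and relative permeability measurements",
}

index: dict[str, dict[int, int]] = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, {})
        index[term][doc_id] = index[term].get(doc_id, 0) + 1

# Rank documents containing 'permeability', most occurrences first.
hits = sorted(index["permeability"].items(), key=lambda kv: -kv[1])
print(hits)   # [(2, 2), (1, 1)] -- document ids with term counts
```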

Dr Irving said that he previously used MapReduce on seismic data, when he was working as a geoscientist before joining Teradata.

'It's something that the community are building, it's not enterprise grade yet. But if you've got some very clever system administrators, computer architects, they could build you something like this,' he said.

'You could do a lot of mucking about with your prestack gathers and derive some sort of insight from them that you couldn't get from the commercial offerings with the same sort of flexibility.'

In its Aster platform Teradata puts a SQL layer underneath MapReduce, so it can draw data directly out of the databases.

So the computer can look for all kinds of patterns in the data, such as spotting that a certain behaviour is seen in the surface compressors when there is a certain reading from the wells and a certain subsurface rock structure.
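In conventional SQL terms, that kind of cross-domain pattern hunt is a join across sensor and geological tables. A hedged sketch using Python's built-in sqlite3, with an entirely invented schema and data - Aster would express something like this as SQL combined with MapReduce functions:

```python
import sqlite3

# Invented schema: find hours where high compressor vibration coincides
# with a pressure drop in wells completed in a given rock type.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE compressor (hour TEXT, vibration REAL);
    CREATE TABLE well (hour TEXT, well_id TEXT, pressure_drop REAL);
    CREATE TABLE geology (well_id TEXT, rock_type TEXT);
    INSERT INTO compressor VALUES ('09', 0.9), ('10', 0.2);
    INSERT INTO well VALUES ('09', 'W1', 14.0), ('10', 'W1', 1.0);
    INSERT INTO geology VALUES ('W1', 'channel sand');
""")
rows = db.execute("""
    SELECT c.hour, g.rock_type, c.vibration, w.pressure_drop
    FROM compressor c
    JOIN well w ON w.hour = c.hour
    JOIN geology g ON g.well_id = w.well_id
    WHERE c.vibration > 0.5 AND w.pressure_drop > 10
""").fetchall()
print(rows)   # [('09', 'channel sand', 0.9, 14.0)]
```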

'It allows you to have a good level of insight from your data, by trying out lots of things and excluding some hypotheses.'

'When I'm doing reservoir modelling, I can put a neural network on it. It performs all the pattern matching on the different reservoir models, and finds things that might be interesting.'

You can search all of your seismic data to look for a trace which has a certain shape.

The analytics can bring in statistical measures. For example, if you want to look for something which looks a bit like something else ('find me all the flow sands', say), the analytics can judge the likelihood that a certain formation is a flow sand, and show you the areas it thinks are most likely to be flow sands.
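One simple way to search traces for a shape is to score each one against a template wavelet with a rough normalised cross-correlation. A sketch using synthetic data - everything here is invented for illustration:

```python
import numpy as np

# Score each trace against a template wavelet and rank by best match.
# Traces are synthetic; real prestack data would come from disk.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 3 * np.pi, 32))   # the target shape
traces = rng.standard_normal((100, 256))           # 100 noisy traces
traces[42, 100:132] += 3 * template                # hide one match

def best_score(trace: np.ndarray, tmpl: np.ndarray) -> float:
    """Highest (roughly normalised) correlation of tmpl at any lag."""
    t = (tmpl - tmpl.mean()) / (tmpl.std() * len(tmpl))
    scores = np.correlate(trace - trace.mean(), t, mode="valid")
    return float(scores.max() / (trace.std() + 1e-12))

scores = [best_score(tr, template) for tr in traces]
print(int(np.argmax(scores)))   # -> 42, the trace with the embedded shape
```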

The same database can also act as the company's master data store.

You can build workflows on it, showing what you want people to do every day, and generate alerts if something isn't done.

'It is not that difficult, it is just data modelling,' he said.
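In the spirit of 'it is just data modelling', here is a minimal sketch of such a workflow table with an overdue-task alert query, again using Python's sqlite3 - the schema and tasks are invented:

```python
import sqlite3

# A minimal task table for a daily workflow, with an alert query for
# anything not done on time. Schema and tasks invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE task (
    name TEXT, assignee TEXT, due TEXT, done INTEGER)""")
db.executemany("INSERT INTO task VALUES (?, ?, ?, ?)", [
    ("QC overnight seismic load", "geophysicist", "2012-03-13", 1),
    ("Reconcile well-test volumes", "engineer", "2012-03-13", 0),
])

today = "2012-03-14"
overdue = db.execute(
    "SELECT name, assignee FROM task WHERE done = 0 AND due < ?",
    (today,),
).fetchall()
for name, assignee in overdue:
    print(f"ALERT: '{name}' ({assignee}) is overdue")
```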

You can have a single data system which covers everything from permanent seismic monitoring to fraccing to hydrocarbon accounting. 'We've done this,' he said.

For one oil company, Teradata is designing a completely new data model to hold the data - although the oil and gas industry already has the PPDM data model, available to PPDM members, and the Schlumberger SeaBed data model.

Why is it hard?

So why does the industry struggle so much to put all of its data in one place?

'I think the industry has always been a bit too conservative and has always been reliant on vendors that have not been looking out at what computer science can offer, especially in data management,' Dr Irving said.

'We have brilliant visualisation, brilliant application development, but we're not good at working with large volumes of data.'

'The super crunching data community gets it, but for everyone else this has been quite a new thing for the last 10 years or so.'

'There are better ways of storing seismic data, instead of storing it in SEG-Y, a tape format. Is that the way we should be storing seismic data? It's an archive and transfer format - not something for complex analytics.'
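To see why SEG-Y is awkward for analytics, consider what it takes just to read basic survey parameters: seeking to fixed byte offsets in a tape-oriented layout rather than querying named columns. A sketch following the SEG-Y rev 1 byte positions, where 'line.sgy' is a placeholder filename:

```python
import struct

# Reading survey parameters from a SEG-Y file means seeking to fixed
# byte offsets in a tape-style layout. Offsets follow the SEG-Y rev 1
# standard; 'line.sgy' is a placeholder for a real file on disk.
with open("line.sgy", "rb") as f:
    f.seek(3200)                      # skip the 3200-byte EBCDIC header
    binary_header = f.read(400)       # the 400-byte binary file header

# Big-endian 16-bit fields at fixed positions within the binary header.
sample_interval_us, = struct.unpack(">h", binary_header[16:18])
samples_per_trace,  = struct.unpack(">h", binary_header[20:22])
format_code,        = struct.unpack(">h", binary_header[24:26])
print(sample_interval_us, samples_per_trace, format_code)
```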


Comparison with retail

Dr Irving said that the oil and gas industry has much more complex data and expertise demands than other industries Teradata works in.

'We do a lot of business intelligence work in different vertical industry sectors, such as retail,' he said. 'Retail is quite simple compared to this.'

But the oil and gas tools are not really business intelligence tools, because they require domain expertise. 'They're built by experts and driven by experts,' he said.

'The clever bit in the software is the algorithms. The graphical interface, buttons and widgets, that's not so special.'

[Other industries] 'don't have the rich applications that we have up here - but they have similar levels of sophistication in the workflow,' he said. 'They are probably 2 years ahead.'

Summary

In summary, Dr Irving presented a new ecosystem whereby MapReduce technologies are used to develop an understanding of data as it enters a workflow.

Once structure and relationships within the data are crystallised, more analytically performant platforms take over. For knowledge generation this could be Teradata's Aster platform; for decision support and predictive analytics, such as in Integrated Field Operations, it could be the standard Teradata data warehouse.

All data and CPUs are co-located and much work has already been carried out in exposing this storage and processing resource to common industry software APIs and algorithms.

The vision of interacting with all this data and functionality through web tools is very close.


