
OAG Analytics - analytic workflows for specific tasks in unconventionals

Friday, June 24, 2016

OAG Analytics of Austin, TX has developed a number of ‘workflows’ for oil companies that can help them achieve a specific objective from their data, for example valuing an area of interest or optimising the completion design for a new well.

OAG Analytics of Austin, Texas, has developed analytics workflows to extract meaning from complex data, helping companies improve core planning functions such as optimising the proppant loading for an unconventional well and rapidly understanding the value of their acreage.

In less than one hour OAG’s Insights Workflow helped a firm identify an opportunity to save $400,000 per well by using less proppant to achieve virtually the same level of production, said Luther Birdzell, founder and CEO of Austin-based OAG Analytics. This insight alone would produce a material return on investment in OAG’s software for this customer.

He was speaking at the Finding Petroleum forum in London on April 18, “Transforming Subsurface Science”.

The company has also developed workflows which can help companies value acreage in 4 hours, where it previously took 5 days.

It can take as little as an hour for an oil company to get value from the software, analysing its own data, in combination with public data and subscription data, he said.

OAG Analytics is helping companies automate data management, particularly for regularly updated data such as production logs and pressure data.

Mr Birdzell has a background in electrical engineering, and previously worked in the software industry, helping companies optimise corporate IT.

He started Oil and Gas Analytics in 2014. OAG began building its customer base in onshore North America and has since expanded to international onshore and offshore work.


Management, analysis and activation

To get value from analytics, you need good data management, data analysis, and ‘analysis activation’ – doing something with the results of your analysis, he said.

Your data management strategy must include providing access to the data, managing its quality, and storing it. To scale, firms usually need more automated data quality control than they typically have, he said.
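
By way of illustration only (a minimal sketch, not OAG’s implementation), automated quality checks of this kind can be expressed in a few lines of Python with the pandas library, using hypothetical column names such as api_number and proppant_lbs:

import pandas as pd

def run_quality_checks(wells: pd.DataFrame) -> pd.DataFrame:
    """Flag common data quality problems in a hypothetical well header table."""
    report = pd.DataFrame(index=wells.index)
    # Missing values in columns the downstream analysis depends on
    for col in ["api_number", "lateral_length_ft", "proppant_lbs"]:
        report["missing_" + col] = wells[col].isna()
    # Duplicate well identifiers usually indicate a failed merge or a double-loaded file
    report["duplicate_api"] = wells["api_number"].duplicated(keep=False)
    # Physically implausible values
    report["negative_proppant"] = wells["proppant_lbs"] < 0
    return report

# Hypothetical usage: issues = run_quality_checks(pd.read_csv("well_headers.csv"))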

Right-sized data analysis includes analytic techniques with the minimum level of ‘complexity’ required to extract reliable signals from the data. If you just have a big cloud of complex raw data, with weak correlations between different aspects, it is very difficult to use it to reduce costs and increase profitability, he said.

Then the analysis needs to be used together with appropriate software, so it can deliver value to the company decision makers.



High and low consequence

It may be useful to note that much of the risk management in the oil and gas industry involves relatively low volumes of high-consequence decisions, he said.

In this, it is similar to mining, medical, pharmaceutical, and some areas of finance. But it is different from many of the industries that have led the acceleration of big data and advanced analytics, such as web search engines, social networks, and online book and video stores (Netflix, Amazon). They typically have the opposite: a large volume of low-consequence decisions, which lends itself more readily to statistics-centric analysis.

Statistics-based techniques are often easier to operationalize in industries with a high volume of low-consequence decisions, he said. However, make no mistake: there is a huge amount of untapped value in the data that most oil companies already have, though most firms need help to unlock it.

Higher consequence decisions require a different approach to advanced analytics, most notably more collaboration with domain experts. Effective collaboration requires more transparency in the analysis so that domain experts can validate that the results make sense.




Workflows

The company has an “Insights Workflow” aimed at finding ways to optimise your project by maximising net present value (NPV), internal rate of return (IRR) or production.

It starts by gathering all the various raw data sets in an oil company, including subsurface (G&G) data, drilling data, completion data, well location data, other well data, production data, and financial data.
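
For context on the financial objective, net present value is simply the discounted sum of a project’s cash flows; a minimal sketch of the quantity such an optimisation would maximise, with an assumed 10% discount rate and illustrative single-well figures, is:

def npv(cash_flows, discount_rate=0.10):
    """Net present value of yearly cash flows, year 0 first (assumed 10% discount rate)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical single-well example: upfront cost, then declining yearly revenue
well_cash_flows = [-6_000_000, 3_500_000, 1_800_000, 1_100_000, 700_000]
print(f"NPV: ${npv(well_cash_flows):,.0f}")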

It has a workflow for unconventional wells, aimed at helping companies make better predictions of production from a given well. There is frequently wide variation in production levels from onshore oil wells with similar designs and costs.

This links to the decision of how much proppant to use. For fractured wells, in general, the more proppant you use, the more production you will have. But many of the high proppant unconventional wells in North America are among the most volatile on both a production and profitability basis.

The data very clearly shows that some high proppant wells, aka “Super Fracs,” have been goldmines, while others have been disasters, he said. That is embarrassing at $100 oil; it can be crippling in a lower-price market.


Purpose-built workflows

Transforming risk management with big data and advanced analytics is extremely complex. Very few firms have achieved material success with this initiative using traditional commercial off-the-shelf (COTS) software, largely due to the complexity of most COTS packages.

Evolving how decisions are made with data-driven insights requires three things: data management; analysis; and enabling decision makers to test “what if” scenarios pre-transaction or pre-drill. Each “module” that is implemented with commercial software is a high-risk IT project. Very few oil and gas firms have successfully completed three parallel high-risk IT projects; even fewer have created the inter-module synergies required to create a cohesive data-driven workflow.

Occam’s Razor [“Entities should not be multiplied unnecessarily”] guides us to simplify each aspect of a complex problem, which in this case means custom data management, analysis and activation software modules. Some firms may choose to develop these capabilities in-house. Others will partner with firms that have already developed proven solutions.

Your chance of success increases dramatically if you only include the capabilities you need to accomplish a specific task, Mr Birdzell said.

For example, OAG Analytics has built tools to automatically detect well identifiers in data, something which is required for nearly every data set.

“Taking that purpose built approach has enabled us to reduce huge amounts of complexity.”
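
As an illustration of the well-identifier idea mentioned above (a heuristic sketch, not OAG’s actual detector): US API well numbers are typically 10, 12 or 14 digits, often written with dashes, so a column can be flagged if most of its values match that pattern.

import pandas as pd

# Illustrative pattern for API well numbers, e.g. 42-123-45678 or 4212345678
API_PATTERN = r"^\d{2}-?\d{3}-?\d{5}(?:-?\d{2}){0,2}$"

def looks_like_api_column(series: pd.Series, threshold: float = 0.9) -> bool:
    """Return True if most non-null values in a column match the API number format."""
    values = series.dropna().astype(str).str.strip()
    if values.empty:
        return False
    return values.str.match(API_PATTERN).mean() >= threshold

def detect_well_id_columns(df: pd.DataFrame) -> list:
    """List the columns in a data set that appear to hold well identifiers."""
    return [col for col in df.columns if looks_like_api_column(df[col])]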

Purpose-built workflows can lead to lower cost, faster delivery time, and also reduced staff training time, because the software becomes much more intuitive, he said.

By comparison, commercial software typically supports every feature for every customer it has ever had. Virtually no single firm needs all of those capabilities.


How to make it work

Before you start a project, you should define your objective, he said. You should then develop a data strategy and then improve it with further iterations.

Pilot projects can be useful for testing methods to solve high value problems.

The greatest data-driven value creation we have seen in oil and gas resulted from collaboration among executives, geoscientists, petroleum engineers, big data strategists, data scientists, and big data software engineers, since virtually no one is an expert in all of these requisite disciplines.

It is useful to note where you have the most data. “If we get outside the boundaries of the data, we're extrapolating. The predictions will be nowhere near as reliable as predictions in the data rich area.”
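
One simple way to operationalise that warning (a sketch, with hypothetical column names) is to check whether a proposed design falls inside the range of the historical data before trusting a prediction:

import pandas as pd

def extrapolating_features(history: pd.DataFrame, candidate: pd.Series) -> list:
    """Return the features of a candidate well design that fall outside the range
    of the historical data, i.e. where a model would be extrapolating."""
    flags = []
    for col in history.columns:
        lo, hi = history[col].min(), history[col].max()
        if not lo <= candidate[col] <= hi:
            flags.append(col)
    return flags

# Hypothetical usage:
# history = pd.read_csv("completed_wells.csv")[["proppant_lbs_per_ft", "lateral_length_ft"]]
# design = pd.Series({"proppant_lbs_per_ft": 4000, "lateral_length_ft": 9000})
# print(extrapolating_features(history, design))  # lists any features outside the data-rich area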

Your data strategy must be able to continually evolve, because the type of data the industry works with is always changing and volumes are growing rapidly.

Many advanced data projects are stifled because companies want to keep using their old (‘legacy’) technology, he said.

Machine learning can often uniquely isolate the effects of individual independent variables (stage spacing, proppant, fluid, location, etc.) on return on investment (ROI), i.e. quantify the impact that the various independent variables have on oil and gas production.
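
One common way to do this (a sketch, not necessarily OAG’s approach, using scikit-learn and hypothetical column names) is to fit a model and use permutation importance to estimate how much each design variable contributes to predicting production:

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

wells = pd.read_csv("wells.csv")  # hypothetical well data set
features = ["proppant_lbs_per_ft", "fluid_bbl_per_ft", "stage_spacing_ft", "lateral_length_ft"]
X_train, X_test, y_train, y_test = train_test_split(
    wells[features], wells["cum_12mo_boe"], random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# Permutation importance measures how much predictive skill is lost when each
# variable is shuffled - one way of isolating the impact of individual variables
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")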

Scalable machine learning solutions should be able to build and test many different algorithms, and then measure which of these produce the models that best represent real world system behaviour.
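
Continuing with the hypothetical data set above, that kind of comparison can be sketched with k-fold cross-validation, scoring several candidate algorithms out of sample:

from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = wells[features], wells["cum_12mo_boe"]  # as in the previous sketch

candidates = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

# Out-of-sample R^2 from cross-validation shows which algorithm's models best
# represent the behaviour captured in the data
for name, estimator in candidates.items():
    scores = cross_val_score(estimator, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.2f} (+/- {scores.std():.2f})")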

Be wary of ‘overfiltering’, i.e. reducing large and complex data sets to small samples with stronger correlations, as it often contributes to spurious conclusions.
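
A small synthetic example of why (a sketch): two variables that are independent by construction can appear related once the data is filtered down to a narrow, “interesting” subset.

import numpy as np

rng = np.random.default_rng(42)
n = 5000
proppant = rng.normal(0, 1, n)      # standardised, independent of production by construction
production = rng.normal(0, 1, n)

print(f"full data: r = {np.corrcoef(proppant, production)[0, 1]:+.2f} (n={n})")

# 'Overfilter': keep only the wells that score highly on a combined criterion.
# Selecting on a function of both variables induces a correlation in the small
# subset that does not exist in the full population - a spurious conclusion.
keep = (proppant + production) > 2.5
print(f"filtered:  r = {np.corrcoef(proppant[keep], production[keep])[0, 1]:+.2f} (n={keep.sum()})")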

One useful data visualisation technique is a graph showing the impact different parameters have on production.

You can also visualise data with a ‘chord diagram’, showing the magnitude of relationships among different parameters with lines connecting them; thicker lines indicate a stronger relationship between parameters. As oil and gas data gets increasingly complex, we are seeing more data sets in which “everything in the data set is related to everything else,” he said.

“Those characteristics have additional complexity that is frequently beyond the reach of traditional analysis techniques like XY plots. Machine learning can be fantastic for navigating this intricate web of complexity. It can isolate the effect of these individual parameters on what we're trying to understand.”
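
The numbers behind a chord diagram of that kind can be as simple as absolute pairwise correlations between parameters (a sketch with hypothetical column names; drawing the chords themselves needs a plotting library):

import pandas as pd

wells = pd.read_csv("wells.csv")  # hypothetical well data set
params = ["proppant_lbs_per_ft", "fluid_bbl_per_ft", "stage_spacing_ft",
          "lateral_length_ft", "cum_12mo_boe"]

# Absolute pairwise correlations give the magnitude of each relationship;
# in a chord diagram each value sets the thickness of the line connecting two parameters
strength = wells[params].corr().abs()
for i, a in enumerate(params):
    for b in params[i + 1:]:
        if strength.loc[a, b] > 0.3:   # arbitrary display threshold
            print(f"{a} <-> {b}: {strength.loc[a, b]:.2f}")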



Associated Companies
» Finding Petroleum

