
Ways to improve production - report from Intelligent Energy

Thursday, September 13, 2012

Technical sessions at the 2012 SPE Intelligent Energy conference in the Netherlands covered a range of ways to improve production: on complex fields, on old fields, across Africa, with thermal EOR and with fibre optics. Speakers included employees of Shell, Chevron and Schlumberger.

Complex fields

Keat-Choon Goh, principal optimisation engineer with Shell, talked about his company's systems to try to optimise the complex Sarawak gas gathering system in the South China Sea.

The paper was SPE 150109, 'Successful Real-time Optimisation of a Highly Complex, Integrated Gas System: Intelligent Energy in the Real World,' by employees of Shell and IPCOS, an optimisation solutions company based in Belgium.

The Sarawak gas gathering system has over 100 wells, 40 platforms, and 3 LNG plants.

Sometimes platforms are shut down, which involves complex synchronisation; some production lines carry carbon dioxide and hydrogen sulphide; and there is an effort to maximise condensate production.

'We want to meet gas demand, and keep CO2 content at its allowable maximum,' he said.

To make things more complex, 'the first batch of wells and platforms have different contracts to the 2nd batch and 3rd batch,' he said.


Shell has to optimise on a well, field and asset level. 'There's a replication of structure of what we are looking at,' he said. 'All the way from multilateral wells to an asset wide basis, like a fern.'

The optimisation also happens at different time scales, he said. 'It's complex on multiple levels. This is a large scale optimisation problem.'

'Optimisation can be tried using Excel files, but it's generally not very convincing,' he said. 'We want a fit-for-purpose optimisation system, to continually optimise production.'

Shell is now using optimisation software, supplied by an (undisclosed) software company.

It works out the optimum set-up so you can get the most condensate, keep within carbon dioxide rules, provide enough gas to meet demand, and maximise revenue.
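To illustrate the shape of this trade-off, the sketch below sets up a toy version of the problem as a linear programme: pick gas rates from a few fields to maximise condensate, subject to a blended CO2 limit and a gas demand target. The field capacities, yields, CO2 fractions and demand figures are invented for illustration; the actual software, model and data behind Shell's optimiser were not disclosed.

```python
# A minimal sketch of a constrained gas-gathering optimisation.
# All numbers below are assumptions for illustration only.
import numpy as np
from scipy.optimize import linprog

# Hypothetical fields: gas capacity (MMscf/d), condensate yield (bbl/MMscf),
# and CO2 fraction of the produced gas.
capacity = np.array([400.0, 300.0, 250.0])
cond_yield = np.array([25.0, 40.0, 10.0])
co2_frac = np.array([0.08, 0.02, 0.05])

gas_demand = 700.0   # total gas to deliver, MMscf/d (assumed)
co2_cap = 0.05       # maximum allowable CO2 fraction in the blend (assumed)

# Decision variables: gas rate from each field.
# Objective: maximise condensate, i.e. minimise its negative.
c = -cond_yield

# Constraint 1: blended CO2 <= cap  ->  sum((f_i - cap) * x_i) <= 0
# Constraint 2: meet gas demand     ->  -sum(x_i) <= -demand
A_ub = np.vstack([co2_frac - co2_cap, -np.ones(3)])
b_ub = np.array([0.0, -gas_demand])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, cap) for cap in capacity], method="highs")

print("gas rates per field:", res.x)
print("condensate produced:", -res.fun, "bbl/d")
```

The real system also has to handle multiple contracts, time scales and shut-in synchronisation, which is what pushes it well beyond a spreadsheet exercise.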

Running the optimiser takes 3-4 minutes on a 4-year-old laptop, he said.

The tool was not developed specifically for Shell, which should make it easier to ensure that someone is available to upgrade it when needed. 'We believe software sustainability is a major concern,' he said.

One audience member from Saudi Aramco said that his company is engaged in similar projects, aiming to maintain gas production levels during optimisation and a certain condensate-to-gas ratio, and optimising because different contracts demand maximum delivery of different substances.

Onshore 4D seismic - for thermal EOR

Kees Hornman, geophysicist at Shell based in The Hague, talked about how Shell is monitoring steam enhanced oil recovery at its Schoonebeek heavy oil field in the Netherlands, using seismic.

The paper was SPE-150215, 'Continuous Monitoring of Thermal EOR at Schoonebeek for Intelligent Reservoir Management' written by employees of Shell and NAM.

The field is monitored using time-lapse (or '4D') seismic (doing seismic surveys at regular intervals to try to work out what has changed).

The time-lapse seismic surveys could help show and quantify pressure and temperature variations in the reservoir, and the results can be used together with well data.
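The core idea of time-lapse seismic is simple to sketch: subtract a baseline survey from a later monitor survey and look for regions that changed. The toy example below does this on synthetic arrays; real 4D work requires careful matching of acquisition and processing between surveys, and nothing here reflects Shell's actual workflow.

```python
# A toy illustration of 4D differencing on synthetic amplitude maps.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(size=(100, 100))              # baseline amplitude map
monitor = baseline.copy()
monitor[40:60, 30:50] += 0.8                        # pretend a steamed zone changed
monitor += rng.normal(scale=0.1, size=(100, 100))   # survey-to-survey noise

difference = monitor - baseline
changed = np.abs(difference) > 0.5                  # crude detection threshold

print("cells flagged as changed:", int(changed.sum()))
```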

Time-lapse seismic is much harder to do onshore than offshore, because there are more factors which can change from one survey to another, such as traffic or machinery noise. But Shell aimed to overcome these problems with a variety of different 'appropriate measures'.

The time lapse seismic has only been used over a portion of the field so far as part of a pilot project, but now Shell will use it over a larger area of the field.

The field was first discovered in 1943, produced until 1996 and was then abandoned. It was restarted in 2009 to take advantage of recent advances in heavy oil technology, including horizontal wells, high capacity pumping units and steam injection technology, and is expected to produce a further 100 to 120 million barrels by 2035.

The remaining oil is very viscous, so it needs steam to produce. However, the oil layer is 20m thick and the reservoir has excellent porosity and permeability, he said.

Reservoir engineers want to get an understanding of how the steam 'chest' is progressing through the reservoir.

Ideally, 'steam propagates symmetrically,' he said. 'But a fault, even if only a few metres, can stop the steam layer for a few years.'

There are many changes which take place in the reservoir during steam injection: increasing reservoir pressure, seismic velocity changes due to that pressure, gas coming out of solution, expansion of the rock (reservoir matrix), and heat in the overlying rock changing the seismic velocity. 'Pressure is propagated quickly, steam and heat are propagated slowly,' he said.

The geophysics models need to take these changes into account. 'You learn, update your models and react,' he said.

Shell is applying the same intelligent reservoir management systems it developed for the Amal West field in Oman and Carmen Creek in Canada.

Fibre optics for strain sensing

Shell is developing technology to monitor the strain which well tubulars are under using fibre optics, said Vianney Koelman, Chief Scientist Petrophysics at Shell.

His paper was SPE 150203 'Optical Fibers: The Neurons for Future Intelligent Wells,' written by 3 Shell employees.

'I'm enthusiastic in general about this technology,' he said. 'We have tens of people in Shell developing this.'

Fibre optics were first used for temperature sensing, and acquired the acronym DTS (distributed temperature sensing).

Now Shell uses the acronym DxS, where the x indicates that you can sense all kinds of things with it. 'There's a whole group of sensing technologies, broadening at a very fast rate,' he said.

You can use fibre optics to measure strain by wrapping the fibres around the tubular and measuring how much strain the fibre optic cables are under, he said. You can measure this because the passage of light through them changes when they are stretched.

If a tubular is under strain, this can be a useful early indication of a well integrity problem.

In one example, 4 fibres were wrapped around a well tubular, enabling detection of exactly how the tubular was being strained. This typically gives a resolution of 1m (you can pinpoint a strain problem to a certain metre length of tubular), he said.
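As a rough sketch of how such a measurement could be turned into a strain profile, the code below converts an assumed distributed frequency-shift reading (one sample per metre of fibre) into strain and flags locations above a threshold. The sensitivity coefficient and alarm level are ballpark assumptions for illustration, not figures from the talk.

```python
# A minimal sketch of turning a distributed fibre reading into a strain profile.
# The Brillouin-type sensitivity and the alarm threshold are assumed values.
import numpy as np

sample_spacing_m = 1.0                 # ~1 m resolution, as quoted in the talk
freq_shift_mhz = np.zeros(500)         # measured shift at each metre of fibre
freq_shift_mhz[245:255] = 30.0         # pretend a localised strain anomaly

MHZ_PER_MICROSTRAIN = 0.05             # assumed sensitivity coefficient
strain_microstrain = freq_shift_mhz / MHZ_PER_MICROSTRAIN

ALARM_MICROSTRAIN = 400.0              # assumed integrity threshold
for i, eps in enumerate(strain_microstrain):
    if eps > ALARM_MICROSTRAIN:
        print(f"possible integrity issue at ~{i * sample_spacing_m:.0f} m: "
              f"{eps:.0f} microstrain")
```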

Fibre optic cables can also measure other parameters.

You can record seismic data downhole (using them as microphones), to get a better understanding of the subsurface between the well and a seismic source on the surface.

The signal to noise ratio on fibre optic acoustic sensing is 'not as good as state of the art geophones,' he said. 'But really what we want is a system which is less costly to deploy.'

You can analyse fluid flow around the well, by detecting how fast seismic waves go through the surrounding fluids.

Another technology under development is to use them for chemical sensing, where you coat the fibre in a special material which swells in the presence of certain chemicals. If the material swells, you can detect it in the light patterns through the fibre. 'Distributed chemical sensing is the least mature. We work with TNO in the Netherlands [to develop it]. We haven't put this in wells yet,' he said.

The fibres could also be used to monitor injection wells (in enhanced oil recovery), and monitor production of hydrogen sulphide.

The real potential is when you have all of these readings together, and have figured out a way to use all the data. 'Business integration is a term which comes back again and again,' he said. 'How to turn fibre optic sensing into value.'

Fibre optic cables can deliver a reading for every 6 centimetres along the cable, and it can all add up to terabytes per day.

This compares to kilobytes per day from permanent downhole gauges installed in the 1980s, or megabytes per day from distributed temperature sensors in the 1990s.
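The rough arithmetic below shows how the quoted 6 cm channel spacing gets to terabytes per day. The well length, acoustic sampling rate and bytes per sample are assumptions typical of distributed acoustic sensing, not figures given in the talk.

```python
# Back-of-envelope data volume for distributed acoustic sensing.
well_length_m = 3000            # assumed well length
channel_spacing_m = 0.06        # 6 cm, as quoted in the talk
sample_rate_hz = 10_000         # assumed acoustic sampling rate
bytes_per_sample = 2            # assumed 16-bit samples

channels = well_length_m / channel_spacing_m
bytes_per_day = channels * sample_rate_hz * bytes_per_sample * 86_400
print(f"{channels:.0f} channels -> {bytes_per_day / 1e12:.0f} TB per day")
```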

'We see an explosion in data rates - but it is also a challenge to get information out of that huge pile of data,' he said.

You don't necessarily need terabytes. 'If we have pressure at 10 or 15 points in a well, for 99 per cent that's enough,' he said.

But still, the first stage is to actually gather the data. 'The real bottleneck is measurements,' he said. 'Petroleum engineers normally admit to that and say 'I'm always eager to get more data.''

'The promise is that we don't get a snap shot picture, we get continuous data.'

The cable survives for long periods and older cables can still provide data which can be used for high-tech data processing methods. 'We've got an old cable, come along with our light boxes, and it works,' he said.

'Does it have a promise for many tens of years? We don't know yet. It is a glass hair in the well.'

Shell works together with a number of technology providers under joint development agreements, he said.

Optimising old Texas field

Bill Taylor, Technical Team Leader at Chevron based in Texas, talked about how his company is optimising the McElroy Field in the Permian Basin, West Texas, which has 600 producing wells and 490 injection wells. A further 50-75 wells are drilled every year.

His paper was SPE-149668, 'Chevron's Digital Oilfields Solutions and Base Business Processes Maximise Value at McElroy Field, West Texas'.

The McElroy Field was first discovered in 1926, and water flooding started in 1948. The field has a variety of artificial lift systems and injection systems.

Today, total oil production is 9,500 bopd, with around 350,000 barrels of water produced per day.

'We are moving a lot of fluid around the field,' he said. 'It is challenging to manage the fluid. There's a lot of complexity to what we're trying to do. The reservoir is more complex than you might think.'

Chevron implemented its Integrated Production System Optimisation (IPSO) program, part of Chevron's 'i-field' program.

The project aimed to put together workflows to manage the waterflood, gather the necessary data, apply filters, and present the results for review.

As a result, the number of injection wells within 10% of their target rate increased from 185 to 275 wells within 8 weeks, he said.

It also led to a reduction in annual decline in production from 13 per cent to 9 per cent, he said.

IPSO guides operators to come up with a plan for how much fluid they want to inject or produce in the various wells, and to see how closely they are following it.

There is a 'well event surveillance tool' which runs every night to analyse the data and identify wells worth a closer look, perhaps 3 wells out of the 1,090 in the field. Meetings are then held on Monday and Wednesday to look at them in more detail.
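As an illustration only, the sketch below shows the kind of nightly screening such a tool might perform: compare each well's measured rate with its target, flag those outside a 10 per cent band (echoing the injection-target metric mentioned above) and rank the worst few for review. The data and logic are invented; this is not Chevron's actual surveillance tool.

```python
# A sketch of nightly well screening against targets, with invented data.
import random

random.seed(1)
wells = {f"well_{i:04d}": {"target": 500.0,
                           "actual": 500.0 * random.uniform(0.7, 1.3)}
         for i in range(1090)}

def deviation(w):
    """Fractional deviation of actual rate from target."""
    return abs(w["actual"] - w["target"]) / w["target"]

off_target = {name: w for name, w in wells.items() if deviation(w) > 0.10}
worst = sorted(off_target, key=lambda n: deviation(off_target[n]), reverse=True)

print(f"{len(off_target)} of {len(wells)} wells outside the 10% band")
for name in worst[:3]:                      # the few wells worth a closer look
    print(name, f"{deviation(wells[name]):.0%} off target")
```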

The software produces a map showing which injection wells are above or below target, with different colours. 'I said, it has to be very visual, I want things that stand out so I can tell what the problem is.'

'The goal was to pull all that information into a tool and manage the results,' he said.

Using statistical analysis, you can work out which wells are most critical to overall performance. 'We said, you don't have to look at all the wells, look at these 10 injectors and 4 producers.'

Once an action plan has been decided on, it goes into an 'action register', to plan the well work.

The field team also used the IPSO process to gather data and look for bottlenecks in the flow.

There is another tool monitoring every piece of equipment in the field.

Managing African production data

David M Smith, Solutions Manager at Schlumberger Information Solutions in the UK and Ireland, talked about the project he has done with Tullow Oil to help Tullow get better production data for its non-operated wells in Africa.

His paper was SPE-149641, "Enhancing Production, Reservoir Monitoring, and Joint Venture Development Decisions with a Production Data Center Solution," written jointly with Tullow Oil.

The aim was to get production data into a common data format, so it could be more easily analysed at Tullow's production data centre in South Africa, to monitor production.

Tullow has non-operated assets in Equatorial Guinea, Gabon, Congo and Ivory Coast, altogether 500 wells with 7 different operators, he said.

There is a production and reservoir engineering team in Cape Town, where engineers monitor 5 reservoirs each, with between 7 and 200 wells.

Each field typically sends data weekly, in PDF, Excel or Microsoft Access database format.

Tullow was missing opportunities to optimise (or suggest means to optimise) production because it could not see all of the production data in a common format, he said.

It could also not assess if wells were interfering with each other.

"They couldn't challenge the operators or suggest production enhancement improvements," he said.

New wells were being added or changing their designation around ten times a month, adding to the complexity, he said.

It was very difficult to track which wells were providing the production.

There was often a problem of data being incomplete or incorrectly loaded, he said. "Far too much engineering time was being spent on data management, leaving less time for value-added work."

A drilling infill program was increasing the number of wells being managed, adding to the complexity.

Tullow and Schlumberger decided to work together to find the best way to handle the data.

They reviewed the available documentation, and defined what was currently in place and where they wanted to get to ('as is' and 'to be').

Tullow wanted "easy access to validated, trusted data, automated data loading, and a standard template," he said. "Efficient data transfer to reservoir models, consolidated reporting on the intranet, the ability to run daily allocations."

The final solution, "Production Data Centre", extracts data from the various file formats supplied by operators, including PDF files and Excel files, and loads it into the central system.

PDFs need to be supplied in the same format week after week for the extraction system to work. "PDF is the least preferred method," he said.
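The general idea of pulling differently formatted operator reports into one common schema can be sketched as below. The column names, operators and file names are hypothetical; the Production Data Centre itself is a Schlumberger product and its internals were not presented.

```python
# A sketch of mapping operator-specific report columns to one standard schema.
# All names here are invented for illustration.
import pandas as pd

# Each operator labels the same quantities differently.
COLUMN_MAPS = {
    "operator_a": {"Well": "well", "Oil (bbl)": "oil_bbl", "Date": "date"},
    "operator_b": {"WELL_ID": "well", "OIL_PROD": "oil_bbl", "REPORT_DT": "date"},
}

def load_report(path: str, operator: str) -> pd.DataFrame:
    """Read one weekly report and rename its columns to the standard schema."""
    raw = pd.read_excel(path) if path.endswith(".xlsx") else pd.read_csv(path)
    std = raw.rename(columns=COLUMN_MAPS[operator])
    std["date"] = pd.to_datetime(std["date"])
    std["operator"] = operator
    return std[["date", "operator", "well", "oil_bbl"]]

# Usage (hypothetical file names):
# combined = pd.concat([load_report("op_a_week36.xlsx", "operator_a"),
#                       load_report("op_b_week36.csv", "operator_b")])
```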

It took 4 months to develop, he said.

The software runs on a small single server system. "It is quick and easy to install and scalable," he said.

As a result, "we got validated and trusted data, more efficient data flow and improved data ownership," he said.

"Sometimes Tullow can alert operators [about a problem] before operators themselves are aware. Tullow can challenge the operator if necessary."

"A second set of eyes on partner data has led to increased production, enhanced reservoir monitoring, and better joint venture investment decisions."

A second production data management project was deployed in London covering European and North African fields, gathering together data from 25 different daily and monthly reports.

The system developed by Schlumberger builds on its Finder data management system.

Improving IT reliability

Schlumberger has been engaged in a project to improve the reliability of its real time data systems, said Sebastien Lehnherr, real-time technology product manager at Schlumberger based in Paris.

His paper was SPE 150095 "Transforming IT to Sustain and Support Real-time Operations Globally" written by Schlumberger employees.

Schlumberger needed to translate its business need for more reliable IT infrastructure into steps that IT managers around the world could follow.

All IT managers are provided with a checklist to follow to help maintain uptime, with questions such as: is the network cabling labelled, is there a back-up generator, is there a secondary WAN connection, are UPS generator tests done twice a year, and does the GeoMarket (regional) IT manager attend GeoMarket operations meetings?
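As an illustration of how such a checklist might be encoded and scored, here is a minimal sketch. The items paraphrase those mentioned in the talk, but the structure and scoring are invented, not Schlumberger's internal standard.

```python
# A sketch of encoding and scoring a site IT checklist (illustrative only).
CHECKLIST = [
    "network cabling is labelled",
    "back-up generator is available",
    "secondary WAN connection exists",
    "UPS generator tests done twice a year",
    "GeoMarket IT manager attends operations meetings",
]

def score_site(answers: dict) -> float:
    """Return the fraction of checklist items a site satisfies."""
    passed = sum(1 for item in CHECKLIST if answers.get(item, False))
    return passed / len(CHECKLIST)

# Hypothetical site assessment:
site = {item: True for item in CHECKLIST}
site["secondary WAN connection exists"] = False
print(f"compliance: {score_site(site):.0%}")
```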

There are specifications for minimum hardware standards.

The standard also included a target for the data communications level expected at all well sites.

"We have a clear standard, very robust," he said.

Schlumberger monitors communications standards on all the rigs where the company operates, looking for factors such as average uptime and data latency.

"We are using this dashboard on a daily basis," he said. "People get alerted as soon as we have problems."

Since bringing in the system, the proportion of rigs reaching the standard has increased from 35 to 95 per cent, he said.

The results show that the company has had "zero catastrophic events since 2009," he said.

"All the key stakeholders get data about performance," he said. "We can keep the eye on the ball. It's made a really big difference."

When Schlumberger staff are working on rigs operated by other drilling companies, Schlumberger people will typically bring their own communications equipment with them, rather than rely on the system that is installed on the drilling rig, he said.

One of the biggest helpers when rolling out the system was "senior management," he said. "I don't think we could have got there without them."



