
AspenTech – what ‘Asset Performance Management 4.0’ means

Friday, September 9, 2022

'Asset Performance Management 4.0' could be defined as building a digital model of the equipment - so you can see where problems are emerging and better understand the risks and what to do about them. Mike Brooks of Aspen Technology explained more

There have been a number of iterations of Asset Performance Management (APM) systems over the past decades, from basic data gathering and analysis (1.0), through information sharing and data integration (2.0), to condition-based or rules-based maintenance (3.0).

APM 4.0, building on this, uses high-fidelity pattern recognition and digital models to compare what is happening with what should be happening, and to work out the appropriate course of action, says Mike Brooks, global director of APM Solutions with software company Aspen Technology.

Mr Brooks has been working in the oil and gas industry for 24 years, including at oil majors ExxonMobil and Chevron. He worked in Chevron's venture capital division for 4 years. He was also a leader in five industrial IT startups.

APM 4.0 software could show you a map of where the specific areas of concern are on your plant, not just where the sensors are showing concerning data, he says. The main difference is that APM 4.0 aims to assure not only that an asset is available and running, but that it can operate at peak performance over its lifecycle, and that the money spent on asset health is directed towards the areas that most constrain the operation. Depending on the level of sophistication, it could show you an emerging problem before it gets serious and give you advice about what to do about it.

The software uses data patterns, models, and simulation, including simulations of fluid flows and equipment. It analyses the possibilities, and what they could lead to. It can work out the problem areas.

You can work out the probability of certain events occurring, such as the probability of a machine degrading to a failure.
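The article does not describe the statistical method behind such estimates. As one common illustration only, a Weibull life model fitted to historical run-to-failure times (the shape and scale values below are made-up) turns that history into a probability of failure within a chosen horizon:

```python
# Illustrative sketch only: probability a machine degrades to failure within a
# horizon, using a Weibull life model. Parameters here are invented examples,
# not fitted values from any real asset.
import math

def weibull_failure_probability(hours_ahead, shape=2.0, scale=8000.0):
    """P(failure before hours_ahead) under a Weibull(shape, scale) life model."""
    return 1.0 - math.exp(-((hours_ahead / scale) ** shape))

if __name__ == "__main__":
    for horizon in (500, 2000, 8000):
        print(f"P(failure within {horizon} h) = {weibull_failure_probability(horizon):.1%}")
```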

The simulation can be used to plan shutdowns and work out the best time to shut something down, or whether there is a way to keep the plant running for a short time while maintenance is done, such as by filling intermediate tanks so that product deliveries can be fulfilled while the service and repairs happen.
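As a rough sketch of the arithmetic behind that kind of decision (all figures below are hypothetical), the question is whether stored inventory can cover deliveries for the length of the maintenance window:

```python
# Hypothetical sketch: can intermediate tanks keep product deliveries going
# while an upstream unit is offline for maintenance? All numbers are invented.
def buffer_hours(tank_volume_m3, current_level_frac, delivery_rate_m3_per_h):
    """Hours of delivery the stored inventory can sustain with the unit down."""
    return (tank_volume_m3 * current_level_frac) / delivery_rate_m3_per_h

maintenance_window_h = 12  # assumed repair duration
covered_h = buffer_hours(tank_volume_m3=5000, current_level_frac=0.8,
                         delivery_rate_m3_per_h=250)
print(f"Tanks cover {covered_h:.0f} h; window is {maintenance_window_h} h -> "
      f"{'OK to run on' if covered_h >= maintenance_window_h else 'shortfall'}")
```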

In one example, a simulation model was made of a refinery which had petroleum coke as one product, being loaded onto rail wagons. The model determined that the biggest constraint in the whole process was the supply of empty wagons.

The system may be used to reduce leaks, not necessarily by detecting them, but by identifying patterns which may lead to the machine leaking.

'For criticality, risk, and cost analysis, we build a model of the whole process. It doesn't matter if it's a chemical plant, refinery or upstream, mining, pharma, and so on. You can simulate all the way through,' he says.

Building these models is not cheap or quick - but then neither is equipment downtime or reduced capacity due to storage concerns, etc.

The models can show which items are the most critical, and so you can better understand the costs and risks if that item is not working, and so decide about what to do.

The software is designed to help companies maximise the performance of the asset. This is the most important business criterion after safety, environmental, and legal issues, he says.

Asset performance is not necessarily about 'utilisation' - a high performing asset is one which is needed when the surrounding operation needs it, not necessarily available all the time. It may also be one which operates most efficiently in terms of energy consumption or other costs, or which can maintain output quality.

APM 4.0 can be compared to getting a car tuned - it isn't just about making the car reliable; it is about making it run better, Mr Brooks says.

Maximising performance involves continuous monitoring, detecting, quantifying risks, knowing what to do, predicting when you might go off target, and executing the changes that could bring things back to full performance.




Fitting with how people work

At the same time, experience has shown that it is very difficult, if not impossible, to introduce new software to an organisation, if using it would involve changing the way people work, Mr Brooks says.

Some oil company customers have said they do not wish to buy software products which demand large changes in work processes, no matter how good the technology is.

It does not always work to try to persuade people that the technology is better than what they currently have, because people typically overestimate the value of what they have got and know. Mr Brooks cites the famous quote from James A. Belasco and Ralph C. Stayer: 'change is hard because people overestimate the value of what they have, and underestimate the value of what they may gain by giving that up.'

AspenTech found it may be better to make software which fits into the way people already work, or which has a company's work processes built into it. 'It can't have too many changes away from that work process, it's got to be easy to follow. It's no use throwing technology at them.'

Consider alerts sent by the software. It is easy to build software that generates alerts, as we all know, but an alert is only useful if someone does something with it. People need to have confidence in the alert, then be able to evaluate the consequences (if whatever you are being alerted about happens), and then be able to do something to prevent it happening.

'We spent a lot of time looking to see how we can improve that.'

When the software presents its analytics output to people, it is more useful, and easier to fit into people's working processes, if it gives them options, rather than simply putting numbers in front of them. For example, they could choose between two outputs from the modelling, one which predicts a longer time to failure but with a lower accuracy level, the other the opposite. The choice might depend on how they see the level of risk, or their assessment of the model.

Software should not overwhelm people with complexity. As with a smartphone, the 'smarts' are kept on the inside of the machine, not the outside.

Some of the software tools use machine learning, but the user doesn't necessarily need to know about it. Indeed, Mr Brooks advises his own sales staff not to focus on machine learning when they promote the products; it's about the job the software does, the specific use case for specific personnel roles.

Just having machine learning is no longer a differentiator for software companies, he argues. 'Every [software company] is doing machine learning. It's like saying, we use C++. So does everyone else. The real question is what you are trying to do with machine learning.'




For operators

As the technology matures, AspenTech is increasingly able to provide tools to equipment operators to help them understand and stabilise a problem, rather than having to pass the data onto data scientists or engineers as an interim step.

A vibration problem could cause equipment damage within a day, so there is a big benefit in operators knowing how to solve the problem, rather than give it to engineers to resolve, which can take a number of weeks.

The operators need to 'get the equipment to a safe place,' Mr Brooks says. 'The most important thing is to make sure equipment is stable.' AspenTech detects the inherent patterns, links the information in data streams on behalf of the users, and helps them develop corrective insights. The system continues to learn and shares the learnings with other users.



Failure modes

One of the most important functions of the software is understanding how degradation leads to failures with equipment. The data pattern-driven approach means that the understanding of failures can be data driven and completely objective.

This makes it different to Failure Mode Effects Analysis, a common industry technique. FMEA 'is held in the industry as a silver bullet. Personally, I don't think it is,' Mr Brooks says.

'It is based on inductive reasoning [developing a theory] not deductive reasoning [testing a theory].'

'We are taking FMEA and adding AI/data driven constructs. That way you can get much closer to the truth.'

With equipment, there are 'causes' of problems and 'failure modes', which are not necessarily the same. With AspenTech's machine learning pattern matching it has been possible to link them together: a single multi-dimensional, temporal data pattern gives a one-to-one link between cause and failure mode.

For example, multiple different cause conditions can lead to the same failure mode. You can be sure there will be a failure even if you don't know why the failure will happen. But AspenTech can tell you why it fails and how it fails.

In one case, an oil company had seen five bearing failures and thought they had the same root cause. But data analysis showed that four were similar, but the fifth had a different pattern, showing it was caused by something different.




Criticality measurement

Another important part of the software is determining which equipment is most 'critical' - so you will prioritise your attention on it.

Again, the model-based approach enables this to be done in a more objective, data driven way, Mr Brooks says.

'The ways that people [normally] measure criticality for asset equipment is very dubious,' he says. For example, they try to calculate a 'risk priority number' for each item, by multiplying a number of estimated risk factors. But if two of those numbers are a little wrong, the multiplied number can get very wrong.
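To see why, take the conventional FMEA-style risk priority number, the product of severity, occurrence, and detection scores (each typically rated 1 to 10). A small worked example with invented scores shows how two modest over-estimates inflate the result:

```python
# Illustration of how errors compound in a multiplied risk priority number (RPN).
# Scores are invented for the example.
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

true_rpn = rpn(6, 4, 5)            # "true" scores give RPN = 120
biased_rpn = rpn(6, 4 + 2, 5 + 2)  # two factors over-estimated by 2 points each
print(true_rpn, biased_rpn, f"{biased_rpn / true_rpn - 1:.0%} inflation")
# -> 120 252 110% inflation: two small scoring errors more than double the RPN.
```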

These risk priorities can be 'decided by a group of risk professionals who have been in the business for a long time. It's apparently all based on opinion, but data can lead to the truth,' he says.

The criticality of a piece of equipment depends on its role in the wider process, and that can be different on each site.

For example, a catalytic cracker fractionation column in a refinery cannot function if the wet gas compressor, which moves the vapours away from the column, is not working. It is a serious problem, recognised as critical since a quarter of the refinery could shut down. But the top pump-around pump could cause the same shutdown, yet is rarely deemed critical.




Anomalies and machine learning

A third important part of the software is looking for anomalies - something in the data, or the relationships between data, which shows something which is not normal.

Understanding whether you are looking at an anomaly is not easy. Inaccurate models produce false anomaly alerts. The AspenTech technique produces highly accurate results through direct pattern recognition in data streams, without requiring deep knowledge of engineering or data science. Regular manufacturing staff can build the models without knowing more than they know now.

Machine learning is used because it is a superior way to compare multiple data streams. A person can typically monitor about three data streams. Computer logic (without machine learning) can be written to work on a limited number, encoding how one data stream usually relates to another.

But a compressor on an oil platform might have 100 data streams from its sensors, and machine learning can see across all those 100 data dimensions, where humans and other technologies cannot.

Machine learning tasks are classified as either supervised or unsupervised. In supervised learning, the computer examines multi-dimensional, temporal data leading up to a specific event, such as a machine failure. Unsupervised learning uses clustering techniques to determine what normal behaviour looks like: you show the computer different patterns, and it learns to distinguish patterns that show normal operation from patterns that show a problem.
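A minimal sketch of the unsupervised idea, using generic clustering on synthetic data rather than AspenTech's actual algorithm: clusters are learned from healthy operation, and new readings are flagged by their distance from those clusters.

```python
# Minimal sketch of "learn what normal looks like" with unsupervised clustering.
# Synthetic data and generic k-means; not AspenTech's method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 8))   # 8 sensor channels, healthy running
faulty = rng.normal(loc=3.0, scale=1.0, size=(20, 8))    # drifted readings

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)

def distance_to_normal(samples):
    """Distance from each sample to its nearest 'normal' cluster centre."""
    return np.min(model.transform(samples), axis=1)

threshold = np.percentile(distance_to_normal(normal), 99)  # tolerance learned from healthy data
flagged = int(np.sum(distance_to_normal(faulty) > threshold))
print(f"flagged {flagged} of {len(faulty)} drifted samples as anomalous")
```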

'We have a special process that reduces the 100-dimensional picture into 2D. It's not exact, but better than one scenario by itself.' This gives the user a much clearer picture of what's actually happening.
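The article does not say which reduction technique AspenTech uses. As a generic stand-in, principal component analysis projects a 100-dimensional sensor snapshot down to two coordinates that can be plotted:

```python
# Generic stand-in for collapsing a 100-dimensional sensor picture to 2D for
# plotting. PCA is used purely as an illustration; the actual technique is not named.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
snapshots = rng.normal(size=(1000, 100))          # 1000 time steps x 100 sensor streams

coords_2d = PCA(n_components=2).fit_transform(snapshots)
print(coords_2d.shape)                            # (1000, 2): one plottable point per time step
```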

'Humans always gravitate to "what's the one sensor that's telling me it's wrong?"' Mr Brooks says. 'It turns out that's not the deal. It is the relationship between multiple sensors that tells you where it's going. One sensor tells you when you are close to failure.'

Relationships between different sensors change as a problem progresses, and it is these relationships that impart the most useful insights.

'As it gets closer to failure it is the vibration that tells you. [But] if you only look at vibration, you're going to find damage that's already happened. You want to find it before the vibration tells you there is damage.'

By analysing the patterns between the data, it can be possible to learn about patterns which emerge months before any visible failing. 'Anyone can tell you a compressor will fail two days before [it fails], but to do it two months before that, takes a lot of understanding of the patterns,' he says.
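A simple, generic way to watch a relationship between sensors rather than a single reading (an illustration, not AspenTech's method) is a rolling correlation between two channels; the correlation can drift well before either channel would alarm on its own:

```python
# Illustrative sketch: track the relationship between two sensors with a rolling
# correlation. Synthetic data; a real system would watch many channels at once.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 2000
flow = rng.normal(100, 5, n)
# Vibration normally tracks flow; after step 1500 the coupling weakens (simulated fault onset).
coupling = np.where(np.arange(n) < 1500, 0.8, 0.1)
vibration = coupling * (flow - 100) + rng.normal(0, 1, n)

df = pd.DataFrame({"flow": flow, "vibration": vibration})
rolling_corr = df["flow"].rolling(window=200).corr(df["vibration"])
print("correlation before fault:", round(rolling_corr.iloc[1000], 2),
      "  after fault:", round(rolling_corr.iloc[-1], 2))
```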




Audio sensors

Audio sensors (microphones) could achieve much more than they do now to detect problems at an early stage, just as the earliest warning of an engine problem could be an experienced engineer noticing the noise change.

'I have a lot of respect for audio signals, and even outside the frequency range that we can hear,' Mr Brooks says. 'We don't have the right sensors yet, but they are coming, I've seen some early prototypes. Sometimes these sensors take 10 years to develop.'

Because the audio signals are being analysed using machine learning, it is 'very tolerant of signal noise,' Mr Brooks says. 'You're looking for a pattern, and for how one pattern compares with another one. ML doesn't care what the signals are - it's just looking for patterns. You can push the audio data into machine learning.'
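As a generic illustration of 'pushing audio data into machine learning' (not AspenTech's implementation), a clip can be reduced to a spectrogram fingerprint and compared against a stored pattern for healthy running:

```python
# Generic illustration: turn an audio clip into a spectrogram fingerprint and
# compare it with a stored "healthy" pattern. Synthetic signals stand in for
# microphone data; this is not AspenTech's implementation.
import numpy as np
from scipy.signal import spectrogram

fs = 20_000                                   # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)

def machine_sound(extra_tone_hz=0.0):
    base = np.sin(2 * np.pi * 300 * t)        # normal running tone
    extra = 0.5 * np.sin(2 * np.pi * extra_tone_hz * t) if extra_tone_hz else 0.0
    return base + extra + 0.1 * np.random.default_rng(3).normal(size=t.size)

def fingerprint(signal):
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=1024)
    return sxx.mean(axis=1)                   # average spectrum as a simple pattern

healthy = fingerprint(machine_sound())
suspect = fingerprint(machine_sound(extra_tone_hz=4500))  # new high-frequency tone
print(f"distance from healthy pattern: "
      f"{np.linalg.norm(suspect - healthy) / np.linalg.norm(healthy):.2f}")
```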




Self-optimising

The goal is to make plants which could be considered self-optimising. 'The Self-Optimising Plant (SoP) takes information from the various planning, optimisation, and asset health software packages and amalgamates them,' he says. Then it determines what needs to be done.

For example, if there is a problem with a compressor being over-driven, the SoP application could find a way to change its operation so that damage is not being caused and a failure is avoided.

The technology is advancing in this direction. 'That's where we're heading, that's a major corporate direction,' he says.

Getting there is about bringing the workflows together between different software tools, such as historians and models. There will be messages and then workflows exchanged between applications, something which Mr Brooks defines as 'interoperation', rather than integration.
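A toy sketch of what message-based interoperation between an asset-health tool and a planning tool could look like; the message fields below are hypothetical, not an AspenTech format:

```python
# Toy sketch of applications interoperating by exchanging messages rather than
# sharing one database. Field names are hypothetical, not an AspenTech format.
import json

def asset_health_message(equipment_id, finding, days_to_failure):
    """Message an asset-health tool might publish for a planning tool to consume."""
    return json.dumps({"source": "asset_health", "equipment_id": equipment_id,
                       "finding": finding, "days_to_failure": days_to_failure})

def planning_workflow(message_json):
    """Planning-side handler: turn the incoming message into a work request."""
    msg = json.loads(message_json)
    return {"work_request": f"Inspect {msg['equipment_id']} ({msg['finding']})",
            "schedule_within_days": max(1, msg["days_to_failure"] // 2)}

print(planning_workflow(asset_health_message("P-101", "bearing degradation pattern", 60)))
```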





Setting it up

The Mtell product for reliability engineers could take as little as a few hours to set up for a simple piece of equipment, or 2 days for a bigger compressor, or a number of months for an entire plant, including onsite testing.

It can be possible to set up a number of different machines at the same time. You can start with a small data pattern recognition model and then expand it rapidly across many machines of the same type.

As a minimum, AspenTech needs enough data to be able to work out what is 'normal', typically covering a year or more of operation. The Mtell predictive application can start to learn and share knowledge from that point.

It doesn't necessarily need large volumes of data - in one example, a system was set up on a slurry pump which only had 4 sensors, looking at temperature and pressure of fluids going in and out and motor amps. The software was able to provide 4 weeks' notice of an impending problem.

For a bigger pump on a refinery, which had 50 sensors including vibration, the system could identify problems 4 months before they happened.

AspenTech recommends a minimum of five sensors for pattern recognition. 'It doesn't care what kind of machine, as long as you've got the data,' Mr Brooks says.







Software products

The Asset Performance Management tools are developed for different users and different use cases.

The 'Aspen Mtell' product, for reliability engineers, looks at reliability, and ways to stop machines breaking or other asset problems. It analyses sensor data to get an idea of whether degradation is happening, and then prescribes what maintenance task should be done.
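As a sketch of the prescriptive step only (a hypothetical mapping, not Mtell's actual rules), a detected degradation pattern might be looked up against recommended maintenance tasks:

```python
# Hypothetical sketch of the prescriptive step: map a detected degradation
# pattern to a recommended maintenance task. Not Aspen Mtell's actual rules.
PRESCRIPTIONS = {
    "bearing wear pattern": "Schedule bearing inspection and lubrication check",
    "seal leak pattern": "Plan mechanical seal replacement at next opportunity",
    "cavitation pattern": "Check suction conditions and strainer for blockage",
}

def prescribe(detected_pattern: str) -> str:
    return PRESCRIPTIONS.get(detected_pattern, "Escalate to a reliability engineer for review")

print(prescribe("bearing wear pattern"))
```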

It can identify problems before a person does, although a person would then decide if it is important enough to make any change.

It can also help a person better understand the problem which generated any alert.

It shows images where people can visualise what is working and what is not working on their plant, or where the concerns are.

The product ProMV is for process engineers, looking at broader processes with chemicals and manufacturing, predicting where errant conditions may occur, and what can be done to correct the issue. It suggests adjustments that can help avoid problems with yield, quality, and waste product.

In upstream oil and gas, this system could be used to help companies reduce flaring, by identifying the operating patterns which lead to flaring being needed, analysing what is happening in real time, and what changes might help reduce likelihood of flaring. An unexpected shutdown always imposes risks, and from a sustainability perspective the carbon release during one flaring incident can surpass all the releases for a year. A major advantage here is detecting an impending issue with sufficient time to plan a safe and orderly shutdown that will not lift the flare valves or lead to unsafe conditions.

In one example, AspenTech helped an herbicide company identify that it could create a superior product by having less heating in one stage, and faster cooling in another.

The Fidelis product is for planners and plant engineers, so they can determine decisions which would make the best return on investment, covering both CAPEX and OPEX. It can be used both in plant design and plant operations.

It can look at one entity within the plant, or how it is interacting with others around it upstream and downstream, including operations, maintenance, logistics and other factors. It can quantify criticality of any individual component in terms of its relationship with other assets and events, and its real effect on the plant profitability.

Fidelis can help you work out what to work on first, or what events are most likely to cause problems, such as weather, flow limitations, personnel shortages, storage limits, or shipping limitations.

Aspen Event Analytics is a tool for front line workers, especially operators, to help them identify 'not OK' operating data patterns and advise them what to do about them.

For example, a control system indicated an abnormal and erratic pressure change on a compressor. The Event Analytics could show which sensors had a changing pattern, so the operator could quickly assess, understand and correct the problem.

This software is reactive rather than predictive. Sometimes the operators will observe that something unusual is happening, and use the software to better understand it, or see if the same thing happened in the past.

Each individual application is developed around the problem it solves in the domain, not around what the technology specifically does. 'I tell our [sales] guys, don't start talking about technology, if you're doing that, it all sounds the same,' Mr Brooks says. 'Talk about the users, their roles, and how we help solve their specific problems.'

Its approach combines domain expertise, pattern recognition of specific equipment, and modelling of the system and process the equipment is part of. 'You can't separate the machine from the process, or the process from the machine,' he says. 'A machine will operate differently in a different process. If you add the domain knowledge you can get much closer to the real solution.'

'With a data science product, you can't do anything until you understand the domain,' he said. 'We worked hard to have that domain knowledge built in.'

Templates are used as a starting point for building the pattern recognition artefacts, which AspenTech calls Agents, for common equipment such as pumps and motors, so the work is already half completed at the start. Templates include the typical sensors fitted to the equipment, and the typical failure modes. Agents are pieces of software that do the work for the end user; they contain the engineering and data science 'smarts' for performing rote, repetitive functions far more often and faster than humans can.
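A sketch of what such a template might hold, with hypothetical fields inferred from the description above (typical sensors and failure modes); this is not the actual Aspen Mtell data model:

```python
# Sketch of an equipment template: typical sensors and failure modes for a class
# of machine, used to start an 'agent' for one specific machine. Field names are
# hypothetical, not the actual Aspen Mtell data model.
from dataclasses import dataclass, field

@dataclass
class EquipmentTemplate:
    equipment_class: str
    typical_sensors: list[str] = field(default_factory=list)
    typical_failure_modes: list[str] = field(default_factory=list)

    def instantiate_agent(self, tag: str) -> dict:
        """Start a pattern-recognition agent for one machine from the template."""
        return {"tag": tag, "sensors": list(self.typical_sensors),
                "failure_modes": list(self.typical_failure_modes), "trained": False}

pump_template = EquipmentTemplate(
    equipment_class="centrifugal pump",
    typical_sensors=["suction pressure", "discharge pressure", "motor amps",
                     "bearing temperature", "vibration"],
    typical_failure_modes=["bearing wear", "seal leak", "impeller damage"],
)
print(pump_template.instantiate_agent("P-101"))
```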

Some of the software is cloud hosted, some is not. 'A lot of energy companies do not want their software supported in the cloud, usually for security reasons,' he said. 'They want software that can be installed inside their firewall.'

'I don't think [user hosted] is a bigger installation task,' he says. 'The cloud makes it much easier for vendors like us to provide and scale. But for individual users, unless it's a big corporation, it's maybe not such a big deal.'



