Unlocking the power of machine learning
Predictive algorithms and natural language processing are fast becoming valuable tools for operators and drilling contractors, fueling improvements in remote BOP monitoring, human decision making and safety
By Stephen Whitfield, Associate Editor
As drillers adapt their rigs and workflows for optimized performance and improved safety amid the continuing oilfield digital transformation, a host of tech developers are coming up with new solutions based on innovations in the field of machine learning and natural language processing (NLP).
- Physics-based machine learning algorithms are being leveraged to optimize remote, real-time BOP monitoring.
- Machine learning and natural language processing are enabling the automatic parsing of structured and unstructured data, creating a platform to help humans with real-time decision making.
- Machine learning is being used to enhance the creation of safety reports and to anticipate potential hazards based on past reports.
One of these companies is RigNet, which had already established itself in building AI-based data analytics applications for drilling. Last year, the company signed two multi-year agreements with Transocean to provide rig analytics applications through Intelie Live, a real-time analytics platform developed by subsidiary Intelie. The platform is being used to help improve operational integrity, efficiency and services across Transocean’s fleet.
Now, the company is expanding its machine learning platform with the Digital Decision Assistant (DDA) application, which builds upon the data analytics system by utilizing NLP to execute tasks and better enable different teams within an organization to share information with one another.
The DDA software includes a real-time data integration platform, which aggregates, processes and hosts both structured data (from sensors and alerts) and unstructured data (like written reports or chats from a designated chatroom installed in the software). Predictive analytic algorithms analyze this data to help workers with real-time decision making in mission-critical processes. It also provides base functions, such as data visualization, searchability, automated reporting, event detection, condition-based monitoring and classification.
Intelie began developing the DDA for wider use in the drilling industry in September 2018 after it received funding from Shell’s GameChanger fund. Hani Elshahwi, Digitalization Lead – Deepwater Technology at Shell, was part of the team that decided to approve funding for this project, believing that NLP could prove valuable to the industry even though the technology has not been fully explored within an oilfield context. It’s a belief he still holds today as he looks at new projects for GameChanger.
“My job is to think of how digitalization can change the nature of your work and the nature of your interactions,” Mr Elshahwi said. “How can we use natural language programming to harness insights and make knowledge available to the user on demand, as opposed to a traditional way of learning? That concept is not transformational, but it could have a significant impact on how we engage in knowledge management across the entire enterprise.”
Ricardo Clemente, Vice President of Product and Business Development at RigNet, said one of the key challenges to any digitalization effort is finding the best way to combine structured and unstructured data, which are often disconnected. The DDA platform works around this by using keyword recognition to search both types of data and present to users what it deems relevant. Users can also make manual queries into the data.
The DDA’s ability to automatically process unstructured data and then blend it with structured data is key, he said, noting that it can even adjust to the complexity added by the technical jargon of drilling operations. “Even inside a single drilling operation, you could have the well formation testing guys using their own language subset and the rig guys using a different subset.”
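A minimal sketch of what the keyword-driven retrieval described above could look like, assuming structured alerts and unstructured chat messages already sit in one repository; the records, field names and query below are invented for illustration and are not RigNet's implementation:

```python
# Hypothetical structured alerts and unstructured chat messages in one repository
alerts = [
    {"time": "08:02", "source": "sensor", "text": "annular pressure above setpoint"},
    {"time": "08:10", "source": "sensor", "text": "pump rate stabilized"},
]
chats = [
    {"time": "08:03", "source": "chat", "text": "watching the annular pressure, holding pumps"},
    {"time": "08:11", "source": "chat", "text": "resuming normal operations"},
]

def keyword_search(keyword, *sources):
    """Return items from all sources that mention the keyword, ordered by time."""
    hits = [item for source in sources for item in source
            if keyword.lower() in item["text"].lower()]
    return sorted(hits, key=lambda item: item["time"])

# Present both structured and unstructured hits on one timeline
for hit in keyword_search("annular pressure", alerts, chats):
    print(hit["time"], hit["source"], hit["text"])
```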
Mr Elshahwi said one of the reasons GameChanger decided to invest in the DDA was because it had an infrastructure built to leverage NLP to handle the complexity of oilfield communications.
“You need to have a reasonable number of end users who use a common protocol for communicating things. The infrastructure you build to handle this communication, and the learnings you gain from the communication, must be scalable into mission-critical realms, like real-time operations in drilling. This is an area where we have relatively well-behaved, well-characterized and well-governed ways of working,” Mr Elshahwi said.
For the DDA platform, Intelie applied a Naive Bayes classifier, which assumes the value of a particular feature is independent of any other feature, in order to identify control statements – statements that determine whether an action will be executed – from chat data. Data from real-time sensor streams, chats, manual notes and annotations reside within a data repository, along with outputs from recognition, notification and alarm agents. This allows users to accumulate data from previous activities and use that to predict future events while a similar activity is in progress.
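As a rough illustration of that classification step, the sketch below trains a Naive Bayes classifier with scikit-learn to separate control statements from general chatter, using a handful of invented, hand-labeled chat messages; it is not the classifier Intelie ships:

```python
# Minimal sketch: Naive Bayes over bag-of-words features for chat classification
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labeled chat messages: 1 = control statement, 0 = other chatter
messages = [
    "start pretest at station 4",        # control
    "hold pressure, do not bleed off",   # control
    "weather looks fine today",          # chatter
    "coffee is ready in the doghouse",   # chatter
]
labels = [1, 1, 0, 0]

# The Naive Bayes model treats each word as conditionally independent given the class
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(messages, labels)

print(classifier.predict(["abort pretest and retract probe"]))  # expect [1]
```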
Wireline testing
So far, Intelie has tested the DDA on wireline pressure testing and sampling operations. Wireline was selected as a good test subject because of data availability and because these operations foster strong collaboration and communication between the operator’s subject matter experts – who are often monitoring systems remotely – and the on-site third-party service personnel.
The machine learning and NLP capabilities in the initial implementation were limited to focus on three areas:
- Real-time data ingestion, conditioning and fusion;
- Capture of chats and user commentary; and
- Visualization and contextualization.
For this test, machine learning algorithms were developed to classify and categorize chat data as an initial processing step, sorting out which statements must be further analyzed. More focused NLP algorithms were also used in this step to extract parameters for the higher-level machine learning algorithms, utilizing parts-of-speech tagging and entity recognition to sort words from chats and documents into various categories of speech (nouns, verbs, adjectives, adverbs).
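A minimal sketch of that tagging step, using spaCy's small English model as a stand-in for whatever NLP stack the DDA actually uses (the model must be installed separately with `python -m spacy download en_core_web_sm`); the chat sentence is invented:

```python
# Minimal sketch: parts-of-speech tagging and entity recognition on a chat message
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Pretest at 14,250 ft showed 9,870 psi before the probe was retracted.")

# Sort tokens into grammatical categories (nouns, verbs, adjectives, adverbs)
for token in doc:
    if token.pos_ in {"NOUN", "VERB", "ADJ", "ADV"}:
        print(token.text, token.pos_)

# Any recognized entities (e.g. quantities) can become parameters
# for the higher-level machine learning algorithms
for ent in doc.ents:
    print(ent.text, ent.label_)
```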
The testing involved data from two unnamed vendors, which was then processed through the DDA platform with the same visualization and contextualization parameters. Data from each wireline pressure test was ingested, conditioned and aggregated into a visual format showing key parameters from the tests.
The system captured chats and user commentary produced in its designated chat room and correlated them with real-time data from the tests and workflow agents to add context to an operational timeline. Event detection algorithms ran concurrently with the data from the tests to log events and notify users of their occurrence. An automatic analysis tool within the system allowed users to choose specific data channels to monitor within the test and compute statistics for those data channels within a selected time frame.
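A minimal sketch of the channel-statistics idea, assuming the real-time feed is available as a time-indexed table; the channel names, values and helper function are illustrative only:

```python
# Minimal sketch: summary statistics for user-selected channels in a time window
import pandas as pd

# Hypothetical real-time feed, indexed by timestamp
data = pd.DataFrame(
    {"quartz_gauge_psi": [9850, 9872, 9869, 9871],
     "pump_rate_cc_s": [0.0, 1.2, 1.1, 0.0]},
    index=pd.to_datetime(
        ["2020-05-01 08:00", "2020-05-01 08:05", "2020-05-01 08:10", "2020-05-01 08:15"]
    ),
)

def channel_stats(df, channels, start, end):
    """Compute summary statistics for selected channels within a time window."""
    window = df.loc[start:end, channels]
    return window.agg(["min", "max", "mean", "std"])

print(channel_stats(data, ["quartz_gauge_psi"], "2020-05-01 08:00", "2020-05-01 08:15"))
```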
The test successfully showed the DDA’s ability to coordinate operations and to initiate and orchestrate different operational workflows and agents. The platform allowed users to call functions and initiate agents from within the built-in chatroom.
The DDA version without NLP has already been deployed to major operations, with more than 3,000 annotations from engineers at one operator, according to Intelie. The DDA with NLP capabilities has yet to be deployed in a real-time operation, but Intelie has established the configuration and connectivity of the platform in near-real-time mode with a variety of service providers; more work will need to be performed on the system before it can be deployed in real time. The company plans to augment its machine learning and NLP capabilities to allow for an expert knowledge capture and retrieval system that can be used with archived operations; it will also enable pattern recognition of impending failures.
Vitor Mazzi, Software Development Manager at Intelie, said the knowledge capture and retrieval system would allow for easier connections between the DDA platform and existing user databases.
“In the same way that you can put data in the proper context, you can expand that out to the pre-existing databases that different companies may already have,” Mr Mazzi said. “If I have a tool that can link with, say, Service Provider X’s anomaly reporting database, or whatever Operator X, Operator Y or Operator Z has, all the KPIs that we’re generating with our system that already have sufficient context, we can add even more context to them.”
Remote BOP monitoring
Companies are also looking at leveraging machine learning to provide greater insight into the health of critical components of a rig. Last year, researchers at the University of Houston developed a physics-based machine learning (PBML) model to enable real-time monitoring of the condition and performance of the blowout preventer (BOP) annular, a major contributor to unexpected BOP-related downtime.
It combines elements of physics-based models and machine learning models, putting the physical characteristics of a BOP annular system into a graph that illustrates the health of the system. A standard machine learning model produces estimates of the output measurement based on a training dataset collected from the field. Physics-based models estimate a target variable given its physical relationship with measurable inputs. The PBML model combines the two, utilizing empirical methods that predict system output based on an input while also adapting the parameters of a physics model to a specific annular.
The PBML model takes a chosen segment of input-output data from the field and analyzes the differences in measured and estimated data from that segment; the findings are then used to calculate changes in the model’s parameters, which correspond directly to changes in the BOP annular.
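The sketch below illustrates the general idea under a deliberately simplified assumption: a linear physics relationship is refit over successive data segments, and drift in the fitted parameters is read as a change in the annular's condition. The model form and numbers are invented and do not represent the University of Houston model:

```python
# Minimal sketch: track drift in physics-model parameters between data segments
import numpy as np

def fit_physics_parameters(inputs, outputs):
    """Least-squares fit of y ≈ k * x + b over one data segment."""
    A = np.column_stack([inputs, np.ones_like(inputs)])
    (k, b), *_ = np.linalg.lstsq(A, outputs, rcond=None)
    return k, b

# Baseline segment (healthy annular) vs. a later field segment
baseline_k, baseline_b = fit_physics_parameters(
    np.array([1.0, 2.0, 3.0, 4.0]), np.array([2.1, 4.0, 6.2, 7.9])
)
current_k, current_b = fit_physics_parameters(
    np.array([1.0, 2.0, 3.0, 4.0]), np.array([1.7, 3.3, 5.0, 6.6])
)

# Drift in the fitted parameters corresponds to physical change in the annular
print("parameter drift:", current_k - baseline_k, current_b - baseline_b)
```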
Dr Moadh Mallek, Data Scientist at Aquila Engineering who previously worked on this model as a Research Assistant at the University of Houston, said a clustering technique was needed to use these new parameters to identify the healthy and unhealthy parts of the BOP. Those are then represented in a graph to identify annular failure modes. “It is a tool to develop boundaries for identifying regions that share the same properties,” he said.
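A minimal sketch of such a clustering step, using k-means from scikit-learn over invented parameter estimates; the actual technique and features used in the research may differ:

```python
# Minimal sketch: cluster per-segment parameter estimates into healthy/degraded regions
import numpy as np
from sklearn.cluster import KMeans

# Each row is a (stiffness, offset) pair estimated from one data segment
parameters = np.array([
    [2.00, 0.10], [1.98, 0.12], [2.02, 0.09],   # tight cluster: nominal behavior
    [1.60, 0.35], [1.55, 0.40],                 # second cluster: drifted parameters
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(parameters)
print(clusters)  # segments in the drifted cluster warrant closer inspection
```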
The UH model was built using lab data from an oilfield equipment provider and has not been used in the field yet. Dr Mallek said it is unclear whether the data supplier will use this model in field applications but, regardless, conclusions from the research could help to inform future development of a BOP monitoring system.
Physics-based models are also at the heart of the predictive performance elements in Aquila Engineering’s BOP real-time monitoring system. The software utilizes automated advanced analytics along with fault-tree analysis, a failure analysis graphical tool that connects a series of lower-level events to the component-level failures that cause larger, system-level failures.
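As a rough illustration of how a fault tree combines lower-level events into a system-level failure through logic gates, the sketch below uses invented events and an invented tree structure, not Aquila's actual analysis:

```python
# Minimal sketch: evaluate a tiny fault tree with AND/OR gates
def or_gate(*events):
    return any(events)

def and_gate(*events):
    return all(events)

# Hypothetical basic events flagged by monitoring data
solenoid_slow_response = True
pilot_pressure_low = False
shuttle_valve_leak = True

# Intermediate and top-level events
pod_control_fault = or_gate(solenoid_slow_response, pilot_pressure_low)
annular_close_failure = and_gate(pod_control_fault, shuttle_valve_leak)

print("Annular close failure credible:", annular_close_failure)
```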
“It’s all about predicting how the asset that we’re tracking will behave in the near future. We’re looking for potential failure modes, so we can help the driller implement a maintenance program,” Dr Mallek said.
The concept of remote BOP monitoring is not new. However, Mark Siegmund, Director of Stakeholder Engagement at Aquila Engineering, said the company’s real-time monitoring system is unique because it has the ability to record and analyze BOP data from drilling operations, function tests and pressure tests to develop a deep understanding of the BOP’s integrity over its complete operating lifecycle. Analytical health tools capture the early onset of failure through sensitive detection of wear or degraded functionality and can automatically alert all stakeholders as they occur.
“The models are very smart,” Mr Siegmund said, noting that they’re physics-based adaptive models specific to product brand and type. They also account for time, number of cycles, performance degradation and are compared with historical trending and failure evidence of similar components in the company’s event management database.
The system is commercially available, supported by subject matter experts in a real-time operations center, and can enable operators to go 21 days between BOP tests. The standard requirement by the US Bureau of Safety and Environmental Enforcement (BSEE) is that operators go no longer than 14 days between pressure tests. However, last year, the agency amended its well control rules to allow operators to request a 21-day testing frequency, provided they submit and obtain approval for a health monitoring plan for the tested BOP that consists of continuous system component monitoring.
Extending the BOP test interval provides improved drilling program flexibility when, for example, drilling a long hole section that would otherwise have to be interrupted by a BOP test before the hole could be cased. “It’s nice to have the flexibility within a 21-day program to finish that hole section, have the well cased and perform a BOP test without any detriment to the well integrity and the critically important long hole section,” Mr Siegmund said.
The company’s ultimate goal, he continued, is to develop a BOP remote verification (BRV) system that can allow operators to verify the integrity of the BOP while reducing offshore travel. A pilot BRV test has already been completed on subsea BOPs with a major operator in the US Gulf of Mexico, with onshore personnel verifying key API pre-deployment BOP tests and remotely confirming the operational integrity of the BOP. This capability will reduce the number of third-party personnel needed to go offshore.
“We have digitalized these API tests based on the drilling contractor-approved test plans and implemented error-proof sequencing logic, with limits on accepted function times and targeted control fluid volumes and flow rates. The test displays have taken all the guesswork out of test verification,” Mr Siegmund said.
The latest pilot project, which was launched in late April, demonstrated the viability of the system to remotely verify all pre-deployment tests. The software was able to perform pre-test logic checks to ensure system readiness and confirm that all components were in the correct closed or open state. The outcome of various tests, like an emergency disconnect sequence test or an accumulator drawdown test, is quickly determined in real time by showing that all components meet API requirements and that previously documented boundary conditions for timing, hydraulic fluid volumes and flow rates have been met.
This pilot proved the automation, logic and digital testing attributes of BRV while also familiarizing the operator and the drilling contractor with the benefits of the system, Mr Siegmund said.
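A minimal sketch of what such a pre-test logic check could look like, with invented component states and acceptance limits standing in for the documented API test criteria:

```python
# Minimal sketch: verify component states and boundary conditions before a test
EXPECTED_STATES = {"upper_annular": "open", "blind_shear_ram": "closed"}
LIMITS = {"close_time_s": 45.0, "fluid_volume_gal": 17.0}

def pretest_check(observed_states, close_time_s, fluid_volume_gal):
    """Return True only if states and boundary conditions are all within limits."""
    state_ok = all(observed_states.get(c) == s for c, s in EXPECTED_STATES.items())
    timing_ok = close_time_s <= LIMITS["close_time_s"]
    volume_ok = fluid_volume_gal <= LIMITS["fluid_volume_gal"]
    return state_ok and timing_ok and volume_ok

print(pretest_check({"upper_annular": "open", "blind_shear_ram": "closed"}, 38.2, 15.4))
```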
Safety reporting
Safety is paramount on the rig, and tech developers are also leveraging machine learning and NLP to help drillers optimize work procedures and minimize on-the-job hazards.
One area where this work is bearing fruit is in standardized safety reporting. In early 2018, Maersk Drilling hired Halfspace, a Danish IT service company that specializes in machine learning and data analytics, to develop a system that would help the drilling contractor make better use of its safety reports and improve the way lessons learned from those safety reports are shared across rigs.
The company had thousands of safety documents from its 24 rigs, all of them in different formats, ranging from XML files and PDFs to Word documents, Excel spreadsheets and even hand-written documents. Many of them were difficult to read and even more difficult to explain to others within the company. On top of that, due to limited internet access at sea, these documents were often stored on local servers.
“The possibility of misunderstandings and errors is quite significant when you have large variation in safety reports,” said Simon Kristiansen, Head of Research and Development at Halfspace. “When you have different templates, different languages, different calculations for safety reports on a rig, the quality of those safety reports can fluctuate.”
To address these inconsistencies in reporting, Halfspace developed a single format language, standardizing document wording, abbreviations and expressions. It also developed several machine learning algorithms that analyze the language in these standardized reports to enhance the workflow of creating safety reports and work instructions. These included recommendation and suggestion algorithms that helped speed up the process of drafting a report and anticipate potential hazards based on past reports.
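A minimal sketch of one way such a suggestion algorithm could work: past standardized reports are indexed with TF-IDF and the most similar ones are surfaced while a new report is being drafted, so previously noted hazards can be anticipated. The report texts are invented, and this is not Halfspace's model:

```python
# Minimal sketch: suggest the most similar past safety report for a new draft
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_reports = [
    "dropped object during riser handling, barrier tape missing",
    "pinch point on drawworks guard during maintenance",
    "hydraulic leak at BOP control pod during function test",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(past_reports)

draft = "planned maintenance on drawworks, removing guard"
scores = cosine_similarity(vectorizer.transform([draft]), index).ravel()
print("Most similar past report:", past_reports[scores.argmax()])
```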
Claus Bek Nielsen, Founding Partner and Managing Director at Halfspace, said that establishing the single format language was a joint process between his company and Maersk.
“After we developed the proofing algorithms and roughly sorted the data, Maersk had its people go through the data and made sure the tasks we identified were correct. It’s why it was important for us to have a very close partnership on this project. We were really honest about saying to them that, while we know how to build algorithms and handle data, you have been in the drilling industry for so many years. We need your knowledge to work with the language of the oilfield. It was a way for us to quality-check whatever we were doing,” Mr Bek Nielsen said.
The project took a year to complete, with Halfspace delivering the final algorithms for use in early 2019. A software company then incorporated the algorithms into a program for Maersk, whose workers now receive automated, intelligent and qualified suggestions on safety procedures based on those algorithms, helping to minimize the risk of work accidents. DC