Of Well Files, Coffee Stains and Cigarette Burns
INDUSTRY - Oil and Gas, PRACTICES - Content Enablement, SOLUTION - Information Management, SOLUTIONS - Industry Solutions
Randy Clark, Senior Principal - Noah Consulting
Several years ago, I was speaking with a data management professional and former geologist from a major oil company about digitizing well files. He related a story that illustrates just one of the challenges in managing well information. He said that one day he happened to open up an accordion file that contained a paper well log and started browsing it. As he was flipping through the sections, he paused because he noticed something odd about a particular log section. There were cigarette burns and coffee stains on the paper. “How did those get there?” he wondered. Was it simply left on the corner of someone’s desk for so long that it accumulated the stains and burns? Or was there something about the section that intrigued a geologist enough that he spent hours examining it? I suppose we will never know. Sadly, a digitized paper well file cannot capture the thought process of the person who may have spent hours or days examining the log section. That is probably lost forever.
When it comes to well files, it is one thing to do the remedial work of digitizing old information. Maybe you can enrich or supplement it, but you basically get what you get. If it is old and inaccurate, you will produce an old and inaccurate electronic well file. We all know why storing information on paper, or in spreadsheets on someone’s laptop, is probably not a good idea: the oil and gas industry is groaning under mountains of different types of data being captured every day by different means from different sources. It is not always easy to retrieve this data when it is needed. In some cases, data never reaches the intended recipient because it is misfiled or, even worse, lost. One of the worst-kept secrets in the industry is that, for many years, oil and gas professionals have spent more time looking for information than actually using it. Some estimates put the wasted time at 50% or more. In a recent conversation, a very senior petroleum engineer told me how long it took to run simulations. He knew exactly where all of the data resided, but it took four days to find, filter, and transform the data for use in his simulation model. I asked him how long it took to run the simulation. He said, “Fifteen minutes.” Sound familiar?
These days, when access to “technically validated information” (consultant term) is critical to making better business decisions, access to the right information by the right people at the right time can often make multiple millions of dollars’ worth of difference. Now that we have a keen grasp of the obvious, what is next?
The good news is that we no longer need to use paper to capture and store well information. We can do it electronically. The question is: “What information do we need for our electronic well file?” The answer is: “What do we want to use it for?” More and more, oil and gas companies are coming to the realization that they now have the capability of tracking information across the entire lifecycle of a well. And not only have they realized that they can track it, but that it is a good thing to do so. That is, once they determine what information they want to persist across the lifecycle of the well.
Most oil and gas companies have a relatively similar lifecycle for a well: identify prospect, acquire lease, explore, develop, produce, and abandon or dispose. So why is it a good idea to track well information across its lifecycle? Information produced at various stages often touches or is touched by almost every other aspect of the business, including leadership, finance/accounting, engineering, legal, geoscience, land, commercial, drilling, completions, production, supply chain, HR and HS&E. (If I left anyone out, my apologies.) Sometimes, the information is only a well identifier; it could also be production volumes and allocations, decline curves, lease expiration information, expenses of every flavor, facilities and equipment information, personnel-on-board (POB), geological tops and picks, reportable incidents and…. I think you get the picture. Can you imagine how much better life would be for oil and gas professionals if they had fast, easy access to any and all information necessary to accomplish, even exceed, their performance objectives? Count me in.
There are a number of best practices emerging around electronic well file management (hat tip to my Noah colleagues, Steve Nuernberg and Steve Gardner). The cornerstone best practice is committing to a common well file rather than having well files built for individual departments. When you have multiple department-level well files, information becomes fragmented. In addition, duplication occurs, because much of the information in a well file crosses discipline and department boundaries. As a result, people email copies to each other, copy them into their own department well file, and so on. Granted, duplication by itself is not the end of the world; however, when you have more than one copy of a piece of information in circulation, it is extremely difficult, if not nearly impossible, to ensure all holders of that piece of information receive updates when changes occur. As a result, you have a mixture of current and out-of-date information. Creating a common, shared well file helps reduce the risk of working with incomplete or out-of-date information.
We are off to a good start, but there are more steps to take. Among them: creating a taxonomy (a what?), capturing metadata, using synonyms, mapping the correct data, creating file naming standards, and implementing a good file management system.
A taxonomy is a simple hierarchy of terms used to classify various types of information. Originally used for classifying species in biology, taxonomy is now applied more generally: documents, data, and other content are classified based on the industry or subject to which they relate. Simply put, taxonomies are used to organize information. Remember the hierarchy from science class: kingdom, phylum, class, order, family, genus, species.
To use a taxonomy for domains other than biology, a hierarchy would need to be created based on a similarly structured way of thinking about classification. The hierarchy would need to ensure that anything within that domain is uniquely identified. A sample oil and gas taxonomy:
- Country - USA
  - Area - Mid Continent
    - Block - Permian
      - Field - Horsehair Flats
        - Well - Smith
          - Discipline - Geology
            - Data Type - Core Analysis
Creating a taxonomy enables users to locate data and understand where it is available for consumption. There are several ways to define a taxonomy, but creating the right one with the appropriate structure is critical. It is important to organize and map data hierarchically into various dimensions, such as geography, function, and business processes. Tagging unstructured data with defined, multi-faceted taxonomical attributes allows queries and filter interfaces to display specific documents quickly and efficiently to users.
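To make the faceted-tagging idea concrete, here is a rough sketch in Python. The facet names, document titles, and the `filter_documents` helper are all invented for illustration; in practice this logic would live inside your document management system, not a script.

```python
# A minimal sketch of faceted taxonomy tagging and filtering.
# All facet values and titles below are hypothetical examples.

documents = [
    {"title": "Smith #1 core analysis",
     "facets": {"Country": "USA", "Area": "Mid Continent", "Block": "Permian",
                "Field": "Horsehair Flats", "Well": "Smith",
                "Discipline": "Geology", "DataType": "Core Analysis"}},
    {"title": "Smith #1 daily drilling report",
     "facets": {"Country": "USA", "Area": "Mid Continent", "Block": "Permian",
                "Field": "Horsehair Flats", "Well": "Smith",
                "Discipline": "Drilling", "DataType": "Daily Report"}},
]

def filter_documents(docs, **criteria):
    """Return documents whose facet tags match every given criterion."""
    return [d for d in docs
            if all(d["facets"].get(k) == v for k, v in criteria.items())]

# Any discipline can filter the same shared document store.
for doc in filter_documents(documents, Well="Smith", Discipline="Geology"):
    print(doc["title"])  # -> Smith #1 core analysis
```

The point is that one shared set of facet tags serves every department, rather than each group inventing its own filing scheme.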
The best-known definition of metadata is “data about data.” A taxonomy is actualized by applying metadata to structured and unstructured records. A general rule is to consider the metadata specification that defines fields like title, description, date, type, subject, etc., while creating the taxonomy. Several fields (e.g. type) should have pre-defined lists of allowed values. Using pre-defined values minimizes errors upon creation and increases successful queries. Designing the metadata elements for a taxonomy and designing the pre-defined values are integrated processes. Each pre-defined value list, hierarchical taxonomy, or authority file will correspond to a different metadata field.
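Here is one way such validation against pre-defined value lists might look, sketched in Python. The required field names come from the specification above; the allowed values themselves are invented for the example.

```python
# Sketch: validating a metadata record against pre-defined value lists.
# The allowed values are illustrative assumptions, not a reference list.

ALLOWED_VALUES = {
    "type": {"Well Log", "Core Analysis", "Daily Report", "Lease Agreement"},
    "discipline": {"Geology", "Drilling", "Production", "Land"},
}

REQUIRED_FIELDS = ("title", "description", "date", "type", "subject")

def validate_metadata(record):
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    for field in REQUIRED_FIELDS:
        if field not in record:
            errors.append(f"missing required field: {field}")
    for field, allowed in ALLOWED_VALUES.items():
        if field in record and record[field] not in allowed:
            errors.append(f"{field!r} must be one of the pre-defined values")
    return errors

record = {"title": "Smith #1 core analysis", "description": "Routine core analysis",
          "date": "2014-03-17", "type": "Core Analysis", "subject": "Smith well",
          "discipline": "Geology"}
print(validate_metadata(record))  # -> []
```

Rejecting free-text values at capture time is much cheaper than cleaning them up after the fact.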
Synonyms should be considered for each facet of a taxonomy. To avoid multiple entries for the same document, use a synonym list in your metadata definitions. As explained previously, pre-defined values are helpful, but may or may not contain synonyms. Also, in this context, synonyms may be classified as preferred terms versus non-preferred terms, since many are not true synonyms. These specifications serve as additional entry points or cross-references to corresponding terms within the pre-defined lists that help build a taxonomy. In addition to selected synonyms, non-preferred terms may be near-synonyms, alternate spellings, grammatical/lexical variants, slang or technical versions, phrase inversions, acronyms, and so on. Since terms in a pre-defined list are often not single words, but phrases of two or three words, there can be many possible synonyms for each term.
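A simple way to apply preferred versus non-preferred terms is to normalize every incoming term before tagging or searching. The term pairs below are hypothetical examples of the kinds of variants and acronyms described above.

```python
# Sketch: resolving non-preferred terms (acronyms, variants, near-synonyms)
# to preferred taxonomy terms. The term list is a hypothetical example.

PREFERRED = {
    # non-preferred term (lowercased) -> preferred term
    "e-log": "Well Log",
    "electric log": "Well Log",
    "wireline log": "Well Log",
    "dst": "Drill Stem Test",
    "drillstem test": "Drill Stem Test",
}

def normalize_term(term):
    """Map a user-entered term to its preferred form, if one is defined."""
    return PREFERRED.get(term.strip().lower(), term.strip())

print(normalize_term("Electric Log"))  # -> Well Log
print(normalize_term("DST"))           # -> Drill Stem Test
```

The same lookup can back both the tagging interface and the search box, so a user typing “e-log” still finds documents tagged “Well Log.”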
Identifying the correct data and making the required master data easily available to the business will save significant time and money. Once identified, data must be mapped against the business lifecycle stages and business processes to ensure consistency throughout the enterprise. Keep in mind: identifying and mapping data is an iterative process. It cannot be done in one shot and should be looked at as an ongoing effort to maintain data integrity. As the knowledge of the data's intended business uses expands, there should be a continued effort to fill in more details about the associated processes.
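Even a simple mapping table makes this concrete. The sketch below pairs data types with the lifecycle stages that produce or consume them; the specific mappings are illustrative assumptions, not a reference model, and in practice would be refined iteratively as described above.

```python
# Sketch: mapping data types to well lifecycle stages.
# The mappings below are hypothetical examples.

DATA_TO_STAGES = {
    "seismic survey":     ["identify prospect", "explore"],
    "lease agreement":    ["acquire lease", "produce"],
    "well log":           ["explore", "develop"],
    "production volumes": ["produce"],
    "plugging report":    ["abandon"],
}

def data_for_stage(stage):
    """List the data types mapped to a given lifecycle stage."""
    return sorted(d for d, stages in DATA_TO_STAGES.items() if stage in stages)

print(data_for_stage("explore"))  # -> ['seismic survey', 'well log']
```

Reviewing this table with each discipline is one practical way to run the iterative mapping exercise.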
File naming standards will enable each asset or division to better identify what data is available, while reducing data duplication due to naming inconsistencies. The recommended naming standard should provide an adequate description and allow effective sorting based on data type and date, eliminating the need for additional sub-folders. However, under no circumstance should dynamic data be placed in a file name. Only use attributes with static information, such as the API number and data type, which will not change or be altered in any way. The most important rule for effective file naming is consistency. Best practice file naming will only occur if the conventions are applied consistently for all files and by everyone involved in file creation.
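One way to enforce such a convention is to generate file names from static attributes rather than typing them by hand. The convention below (API number, data type, document date) is a hypothetical example, not an industry standard.

```python
# Sketch: composing a consistent file name from static attributes only.
# The <API>_<DATA-TYPE>_<YYYY-MM-DD>.<ext> convention is a made-up example.
import re

def make_file_name(api_number, data_type, doc_date, extension):
    """Build a standardized file name from attributes that never change."""
    # Keep only digits from the API number so formatting variants collapse.
    api = re.sub(r"\D", "", api_number)
    # Normalize the data type: uppercase, spaces become hyphens.
    dtype = re.sub(r"\s+", "-", data_type.strip()).upper()
    return f"{api}_{dtype}_{doc_date}.{extension.lstrip('.')}"

print(make_file_name("42-501-20130", "Core Analysis", "2014-03-17", "pdf"))
# -> 4250120130_CORE-ANALYSIS_2014-03-17.pdf
```

Because every component is static, the name never needs to be revised, and alphabetical sorting groups files by well, then data type, then date.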
Further, a corporation should typically establish an enterprise-wide file management system as a best practice. At a minimum, the system should support standard, unstructured data taxonomy designs and user access roles that enable efficient and secured data capture, search, and retrieval.
So if we could store all of the information we needed across the lifecycle of the well (which we can) in a common well data file and integrate it with each of the various functions and systems that could benefit from it (which we can), then how would we keep the whole situation from unraveling and degrading over time? This is actually the hard part. The answer is the development, implementation and institutionalization of a rigorous program of enterprise data management including data integration, data quality, data security and data governance (which we can, as well). For those of you who have yet to go through that process, it goes something like this.
First, determine the business drivers for better data management, preferably with the participation of both IT and business representatives. Cool technology is, well, cool, but the business should be the driver.
Second, develop a vision for your improved well data management. “Right information to the right people at the right time,” for example.
Third, determine your guiding principles. Guiding principles are basically foundational truths or methodologies of operation that link, direct, and show the way, irrespective of changes to goals. A guiding principle could be something such as, “Data will only be loaded once.”
Fourth, determine your scope: operated wells, non-operated working-interest wells, wells holding leases, etc. Then decide what other information is necessary to accommodate your business drivers.
Now that you have done all of this, you are ready to build a framework that is aligned with and supports all of the aspects mentioned above for managing your well data.
One of the best ways to institutionalize any process or methodology is to regularly measure it against metrics developed to reflect its success at supporting the business drivers and the vision; then, incorporate that standardized measurement into the performance objectives of every level of the organization.
We have come a long way since the days of coffee stains and cigarette burns on log files and there is still a long way to go, but there is hope.
ABOUT THE AUTHOR
Randy Clark is a Senior Principal serving as client relationship manager and project manager. Randy brings more than thirty years of experience in the oil and gas industry in a variety of management and executive positions, including six years as President & CEO of Energistics, the upstream E&P information and data standards consortium, and three years as SVP of Customer Relationship Management with Hubwoo, the global electronic procurement marketplace. He also spent over ten years in a number of roles with Baker Hughes. His substantial international business expertise has been gained by developing a strong contact network of executives, leaders and innovators throughout a successful career as a senior executive dealing in diverse global projects. Randy’s broad range of experience includes oilfield services, oil and gas production facility design and project management, customer relationship management, purchasing management, business process re-engineering, new product development, data and information management, and e-business strategy and development. In 2006, Clark was named one of the “Top Ten Most Influential People in Upstream IT” by Upstream CIO magazine. He is a past chair of PIDX, the e-commerce committee of the American Petroleum Institute, and has served on the Executive Committee of the Society of Petroleum Engineers’ Intelligent Energy Conference. Randy has BS and MBA degrees and is married with two daughters and two granddaughters. He is active in his church’s music program and enjoys gardening, reading, photography and fishing.