An Easy-to-Implement Solution for the Petrel Data Management Challenges in E&P Companies.
Petrel® is one of the most widely used seismic interpretation and modeling packages in the Oil and Gas Industry. Some of Petrel's advantages are that it is a Windows-based platform, it has a friendly user interface, and it brings several disciplines together in one single place. As with any other software on the market, there are also some challenges with its use.
But you know this already, so let's get to the point and outline some facts:
First, Petrel is a platform that places no restriction on the generation of data. You can create polygons, point-sets and surfaces, copy objects from the Input pane, paste them with the same name, change that name, the domain or the object's color; you can do almost whatever you want!
For a Geologist or a Geophysicist, this is a dream come true, especially if you have worked with other platforms that enforce very strict data management policies (for example, where creating a project means asking for permission at many different management levels, i.e. bureaucracy). They can test multiple processes on the same data and have all the results available at first hand. Paradise on Earth (well, at least on your desk).
For Data Managers, this is a total nightmare. Imagine the poor DM guy looking at a project he has to archive, in which there are seven unconformity surfaces called “Surface 1”. Estimate the time he has to spend sitting with the interpreter, deciding which ones to delete.
In the world of business, and in these low oil price times more than ever, Time = Money. E&P companies lose a lot of money when their resources spend too much time on these issues.
Second, Petrel is a modeling platform. Models are constantly updated, especially when new data has been added to the project: a new seismic survey, a new well, another analysis, etc. That data has to be incorporated into the model, generating another version of it. Different models create different versions of the data. This, added to the fact that you can manipulate data as you want, creates a huge mess if you don't have a proper naming system. Now estimate again the time the DM has to spend filtering the models with the geomodeler in order to archive the correct ones.
Third, Data Managers:
- Do you know how many Petrel projects are in the system?
- Do you know in which folders they are?
- How many of them are Reference Projects?
- How can you classify the projects in order to archive them?
There has to be a standard, based on the different features that projects must meet in order to be archived: Coordinate Reference Systems, Units of Measurement, Petrel version, project status, etc. Checking all these features takes the DM a lot of time, because every project's settings have to be inspected one by one (and it is very likely that the company has at least 200 projects).
Fourth, Petrel users:
- Do you know exactly where all your projects are?
- Would you like to know the date that a project got corrupted?
- Would you like to know who made the interpretation of those horizons and when?
- What would happen if, for some reason, you misplaced a project in a folder and now you cannot find it? How much time did you spend doing all that work in the first place?
All these issues bring consequences for E&P companies:
- High number of Petrel projects.
- Petrel project duplication and versioning = more time spent looking for the correct version.
- Data duplication (approx. 30 GB per project) = less disk space available = more money spent on storage devices.
- Users and DMs waste valuable time looking for the correct data to work with.
How to solve these challenges?
We, Cegal Geoscience (previously Blueback Reservoir), propose a very simple solution for this situation: track projects and data inside all the storage devices – local hard drives and network shared drives.
The process is divided into three stages. The first one is analyzing the status of the system: how many projects the company has, where they are located, their coordinate reference systems, who created them, etc. A fingerprint is assigned to each object inside the Petrel projects to quickly locate duplicated objects. All of this is done by a plug-in that works in the background and sends the information to a database, invisibly to Petrel users, meaning there are no blackouts (no lost time for users).
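To picture the fingerprinting idea, here is a minimal Python sketch. It is only an illustration under assumptions: the object inventory, the project names and the SQLite schema are hypothetical, and the real work is done by the plug-in inside Petrel, not by a script like this.

```python
import hashlib
import sqlite3

def fingerprint(obj_type: str, name: str, payload: bytes) -> str:
    """Build a stable fingerprint from an object's type, name and raw data."""
    digest = hashlib.sha256()
    digest.update(obj_type.encode("utf-8"))
    digest.update(name.encode("utf-8"))
    digest.update(payload)
    return digest.hexdigest()

# Hypothetical inventory: object type, name and raw data pulled from two projects.
projects = {
    "FieldA_2015.pet": [("surface", "Surface 1", b"\x00\x01\x02"),
                        ("polygon", "Lease boundary", b"\x10\x11")],
    "FieldA_copy.pet": [("surface", "Surface 1", b"\x00\x01\x02")],
}

conn = sqlite3.connect(":memory:")  # stand-in for the central tracking database
conn.execute("CREATE TABLE objects (project TEXT, name TEXT, fp TEXT)")
for project, objs in projects.items():
    for obj_type, name, payload in objs:
        conn.execute("INSERT INTO objects VALUES (?, ?, ?)",
                     (project, name, fingerprint(obj_type, name, payload)))

# Fingerprints seen in more than one project point at duplicated objects.
for fp, count, where in conn.execute(
        "SELECT fp, COUNT(DISTINCT project), GROUP_CONCAT(project) "
        "FROM objects GROUP BY fp HAVING COUNT(DISTINCT project) > 1"):
    print(f"Duplicate candidate {fp[:12]} found in: {where}")
```

Because the fingerprint is derived from the object's content rather than its display name, the seven surfaces all called "Surface 1" can still be told apart, and genuine copies surface immediately.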
The second phase includes the analysis of the data and some quality control measures: Is the project active? When was it last opened? Does it have the correct CRS? After that, in order to contain the mess and stop Petrel users from creating more problems of this kind, rules have to be implemented. Example: all seismic volumes have to be imported from Reference Projects or Studio repositories. If a rule is not complied with, a notification is sent to the Data Manager, who can then decide what to do with the project: call the user in for a friendly chat, leave everything as it is, etc.
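Such a rule can be expressed as a simple check. The sketch below is purely illustrative: the per-project metadata layout, the reference locations and the notification step are assumptions, not the actual plug-in logic.

```python
# Hypothetical per-project metadata collected by the background scan.
project = {
    "name": "FieldA_2016.pet",
    "owner": "jdoe",
    "seismic_volumes": [
        {"name": "3D_final", "source": r"\\share\reference\RefProject.pet"},
        {"name": "3D_test",  "source": r"C:\temp\local_copy.pet"},
    ],
}

# Assumed locations of Reference Projects and Studio repositories.
REFERENCE_LOCATIONS = (r"\\share\reference", r"\\share\studio_repo")

def violations(project: dict) -> list:
    """Return seismic volumes that were not imported from a reference location."""
    return [v["name"] for v in project["seismic_volumes"]
            if not v["source"].startswith(REFERENCE_LOCATIONS)]

bad_volumes = violations(project)
if bad_volumes:
    # Stand-in for the notification sent to the Data Manager.
    print(f"Rule broken in {project['name']} (owner: {project['owner']}): "
          f"seismic volumes not from a reference location: {', '.join(bad_volumes)}")
```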
If DMs want to check coordinates, this is the best time to do so. The plug-in also retrieves the original coordinates of each object and sends them to the database. ArcGIS can then be used to access the database and plot those coordinates on a map to QC their position.
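One simple way to picture this step: dump the stored positions to a CSV file that ArcGIS (or any GIS package) can load as XY data. The database file name, table and column names below are assumptions about the inventory built in the first stage.

```python
import csv
import sqlite3

# Assumed inventory database and schema: one row per object with its X/Y position.
conn = sqlite3.connect("petrel_inventory.db")
rows = conn.execute(
    "SELECT project, object_name, x, y FROM object_coordinates"
).fetchall()

# Plain CSV that can be added to a GIS map as XY data for a visual position QC.
with open("object_positions.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["project", "object_name", "x", "y"])
    writer.writerows(rows)
```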
All the information and metrics sit in one single database that can be analyzed in a web-browser application.
The third phase of the solution includes cleaning up the problem projects, archiving the correct ones and maintaining the system. From the third phase we go back to the first one, because this is a never-ending process: every day E&P companies receive new data and create new Petrel projects; it never stops.
What if the company has Studio® in place? No problem. Before implementing Studio, our solution gives the Data Manager an overview of the system and the opportunity to clean it up, making it a nice complement. After Studio has been implemented, the plug-in can go into the repositories and analyze the data transferred between Petrel and Studio. Also, users can still create projects and data in Petrel without pushing them into Studio, so you still need a policeman patrolling the Petrel environment.
I have run an exercise in different E&P companies: it consists of implementing this application in their environment for a couple of days to retrieve information about the Petrel projects (name, location path, CRS, number of objects, project owner, creation and last modification dates, etc.). What the companies get in return is a full report containing the total number of Petrel projects per version, an analysis of the low-quality projects (no CRS and not opened for more than a year), duplicated projects, duplicated seismic files, the data file sizes, a list of corrupted projects and much more. You would be surprised how many of them had no idea how messy their Petrel environment was. Having this system in place lets E&P companies produce the same type of report on a daily, weekly or monthly basis.
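The disk-scanning side of such an exercise can be sketched in a few lines of Python. The search roots, the one-year "stale" threshold and the report layout are illustrative assumptions; the real application also reads CRS, owner and object counts from inside each project, which a plain file-system walk cannot do.

```python
import csv
import os
from datetime import datetime, timedelta

SEARCH_ROOTS = [r"D:\PetrelProjects", r"\\share\geoscience"]  # hypothetical roots
ONE_YEAR_AGO = datetime.now() - timedelta(days=365)

rows = []
for root in SEARCH_ROOTS:
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            if filename.lower().endswith(".pet"):  # Petrel project files
                path = os.path.join(dirpath, filename)
                info = os.stat(path)
                last_modified = datetime.fromtimestamp(info.st_mtime)
                rows.append({
                    "project": filename,
                    "path": path,
                    "size_mb": round(info.st_size / 1e6, 1),
                    "last_modified": last_modified.isoformat(timespec="seconds"),
                    "stale": last_modified < ONE_YEAR_AGO,  # untouched for a year
                })

fieldnames = ["project", "path", "size_mb", "last_modified", "stale"]
with open("petrel_project_report.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)

print(f"Found {len(rows)} Petrel project files; "
      f"{sum(r['stale'] for r in rows)} not modified in over a year.")
```

Even this crude version answers the first questions above: how many projects there are, where they live and which ones look abandoned.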
Data Managers: can you imagine setting this system up to run on a Friday and having an overview of the Petrel database on Monday morning?
Petrel users: can you imagine losing a project and now simply being able to call your DM to find out where it is?
Data Management may sound like a very boring topic that no Petrel user wants to hear about, but having a proper system to control the environment brings a lot of benefits for everyone in E&P companies. Less time lost = less money lost.
Petrel® and Studio® are registered trademarks of Schlumberger.
Perhaps there is a natural tendency to seek a broad range of geoscience answers from a powerful single software platform, such as Petrel®, without taking the time to appreciate how projects and data are structured within the underlying database. This approach can lead to errors and inefficiencies, which can certainly be reduced by repairing project and data irregularities as they arise. From another perspective, using a broader range of geoscience tools, where appropriate to the geological and/or geophysical challenges at hand, may lead to better data management and greater efficiency overall.