E. O. Nestvold
IBM
Houston

C. B. Su
IBM
Dallas

J. L. Black
Landmark Graphics
Denver

I. G. Jack
BP Exploration
London
Two-way travel time surface of the top reservoir event, Kuparuk field, Alaska. Data from a 3D survey slotted into the center of the survey show much greater detail than do the surrounding 2D data. Image courtesy of BP Exploration (Alaska) and Arco (Fig. 1).

The significance of 3D seismic data to the petroleum industry during the past decade cannot be overstated. Having started as a technology too expensive for any but the major oil companies, 3D technology is now routinely used by independent operators in the U.S. and Canada.
As with all emerging technologies, documentation of successes has been limited. Some successes, however, have been summarized in the literature in recent years.1-7 Key technological developments contributing to this success have been major advances in RISC workstation technology, 3D depth imaging, and parallel computing.
This article presents the basic concepts of parallel seismic computing, showing how it impacts both 3D depth imaging and more-conventional 3D seismic processing.
Example: 3D survey
What is 3D seismic? It is a tool that allows us, for the first time, to see into the earth correctly.
The image in Fig. 1, presented by Speers and Dromgoole8 and by Baird,9 shows the top reservoir of BP and Arco's Kuparuk field in northern Alaska. The two-way travel time surface of the top reservoir event is displayed in color, dipping down to the northeast from red to blue. The surface is also artificially illuminated from the northeast and displayed as intensity, which accentuates changes in dip and azimuth and highlights the dense fault pattern within the reservoir.
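As an aside for readers curious how such a display is built, the sketch below is our illustration, not BP's or Arco's workflow: it assumes a gridded two-way-time surface held in a hypothetical NumPy array, takes dip and azimuth from the surface gradient, and uses a simple Lambertian dot product with a light direction from the northeast to produce the intensity.

```python
import numpy as np

def shaded_relief(twt, cell_size=25.0, azimuth_deg=45.0, elevation_deg=45.0):
    """Shade a gridded two-way-time surface (hypothetical array `twt`,
    rows increasing to the north, columns to the east) as if lit from a
    chosen direction, so changes in dip and azimuth show up as intensity."""
    # Surface gradient per grid cell in the north and east directions.
    dt_dn, dt_de = np.gradient(twt, cell_size)

    # Unit normal of each surface element (treating time as elevation).
    normal = np.dstack([-dt_de, -dt_dn, np.ones_like(twt)])
    normal /= np.linalg.norm(normal, axis=2, keepdims=True)

    # Light direction: azimuth clockwise from north, elevation above horizon.
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    light = np.array([np.sin(az) * np.cos(el),   # east component
                      np.cos(az) * np.cos(el),   # north component
                      np.sin(el)])               # vertical component

    # Lambertian shading: brighter where the surface faces the light.
    return np.clip(normal @ light, 0.0, 1.0)

if __name__ == "__main__":
    # A synthetic plane dipping to the northeast stands in for a real surface.
    demo = np.add.outer(np.linspace(0.0, 0.2, 200), np.linspace(0.0, 0.2, 200))
    print("mean intensity:", shaded_relief(demo).mean())
```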
A 3D survey is slotted into the center of the image, which also incorporates the widely spaced 2D seismic data acquired in the areas surrounding the field. The difference in the level of detail between the 3D dataset in the center of the image and the surrounding 2D data is dramatic. One does not expect, for example, that the small-scale faulting would stop suddenly at the edge of the 3D survey. Recognizing the true degree of reservoir complexity from the 2D seismic data alone is simply not possible.
This, to us, is a very clear illustration of the power of 3D relative to that of 2D geometry.
Because of reservoir compartmentalization, a significant infill drilling program is needed to maximize recovery in Kuparuk field. Speers and Dromgoole8 report that this 3D survey has helped position wells at optimum locations by confirming the identification of small-scale faults and by helping to resolve areas of questionable fault continuity. The economic return from this 3D survey for BP and Arco is immediate, because wells that are drilled tend to be successful, and those that would have been marginal at best are not drilled.
3D seismic imaging
In the early days of 3D seismic use (the 1970s and early 1980s), surveys were commissioned largely to appraise a discovery, and then only over a small area surrounding the discovery well. Since then, the use of 3D surveys has grown to include exploration surveys on the one hand and surveys over producing fields on the other.
Today, large exploration surveys are being commissioned in highly prospective areas that may have little or no well control. The rationale is that exploration targets in the survey area will be identified and mapped by the 3D data, even though the location and size of these targets are not known beforehand.
Also, in the early days, surveys were not commissioned over producing fields on the grounds that an abundance of wells provided adequate subsurface control and that 3D seismic acquisition would constitute overkill. In recent years that theory has been dispelled over and over again as operators have added significant value with 3D surveys over producing fields. Several examples are discussed in Nestvold.10
Technology developments
A number of technological developments have contributed significantly to the wide acceptance of 3D seismic data during the past decade.10, 11 Several of these are:
- Multi-streamer, multi-source, multi-vessel 3D marine technology.
- Onboard processing of navigation and seismic data.
- Depth imaging, now coming into its own, principally in the Gulf of Mexico.
- Workstation technology.
The main contribution of these developments has been to make the 3D product much more available (through lower price and faster turnaround) and far more impactful (through visualization). And with acceptance and use of 3D technology growing, the challenge has become computational as the industry advances beyond conventional, but already data-intensive, 3D processing into more complex techniques such as depth imaging.
Parallel seismic computing has been crucial to this progress. In fact, it is high-performance parallel computer technology that makes 3D prestack depth imaging viable as an exploration and production tool.
Seismic crunch
The objective of seismic processing is to transform a large amount of data into an accurate image of the subsurface. Although compute power is steadily increasing, we tend to increase our input data volumes and compute requirements as we add compute power.
Fig. 2 shows the approximate compute power of different computer systems. A logarithmic vertical scale is needed to capture the enormous difference in compute power between PCs, at the low end, and multi-node IBM SP-2s, at the high end.
Fig. 3 shows the increase in the volume of raw input seismic data over the same period. The top curve (samples per square kilometer of 3D data) is probably the most relevant; it accelerated very quickly as the industry switched to 3D and now seems to be tapering off a little.
In addition to expanding the input data enormously, the industry also continues to apply more intensive processing algorithms. Fig. 4 shows the increase in floating-point operations per input unit. This is set to accelerate as the industry begins to cope with 3D prestack depth-domain imaging, as already discussed, which requires an increase of some two to three orders of magnitude. Because of the interpretive nature of the processing and the number of iterations required, the increase may be even greater than shown here.
Fig. 5 is historical and applies to time-domain processing. It combines the previous figures and shows that increases in computer power matched the seismic industry's increases in data volume and processing intensity during the 1960s, '70s, and '80s and actually exceeded them from 1990 through 1994.
This has resulted in the ability to process data on board seismic vessels almost as fast as they are acquired and has led to genuine changes in the way the industry works. If this figure were extended to cover depth-domain processing today, the time to process 1 km of seismic data would increase again.
Depth imaging requirements
In seismic processing, the key problems that must be solved are wavelet processing and imaging.12 Wavelet processing requires much data management, moderate computation per recorded word, and relatively little human interaction. Imaging, on the other hand, requires heavy computation per recorded word, moderate data management, and intensive human interaction for velocity analysis and modeling.
Because of limitations of time and cost, 3D imaging has traditionally been done with accuracy-compromising techniques that rely upon a 3D stack to reduce the data volume before the compute-intensive 3D migration step. In recent years it has become apparent that in some areas the goals of computer-aided exploration and reservoir management will be met only by taking the prestack data directly into the migration step via 3D prestack depth migration.13
The importance of 3D prestack depth migration has been highlighted by the 1993 subsalt discovery (Mahogany) by Phillips, Amoco, and Anadarko in the Gulf of Mexico,14, 15 which kicked off a new play. In this play, prestack depth migration is necessary because the objective structures lie below the salt, where they cannot be imaged properly with conventional 3D time migration. This technique, however, requires huge amounts of computation and is so sensitive to the accuracy of the velocity model that it requires innovative tools for interactive migration velocity analysis, such as ProMAX MVA.
Fig. 6 shows the computing requirements driven by the progression of better 3D imaging techniques beginning in 1980. For each of the five steps in the progression, we have estimated the number of words of input data, the number of floating-point operations per input word, and the product of these two quantities, which is the total demand for floating-point operations. All three quantities are normalized to early-1980s requirements, when the standard for imaging was poststack time migration preceded by a conventional stack, referred to here as "non-steep dip."
The input data sizes increase because of improvements in data collection techniques that allow more 3D data to be acquired economically. For more accurate imaging, these larger data volumes are needed to provide larger migration apertures, deeper images, larger offsets, and finer sampling. The figure shows a demand for two to three orders of magnitude more power than was used routinely 15 years ago, and this estimate does not even count the cost of iterating to carry out the 3D migration velocity analysis (MVA).
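The arithmetic behind Fig. 6 is simply the product of the two normalized factors. The sketch below illustrates the form of that calculation; the step names and relative factors are placeholders we have assumed for illustration, not the values actually plotted in the figure.

```python
# Illustrative only: the step names and relative factors below are assumed
# placeholders, not the values plotted in Fig. 6. They show how the three
# normalized quantities relate: total flops = input words x flops per word.
imaging_steps = {
    # step:                      (relative input words, relative flops/word)
    "poststack, non-steep dip":  (1,    1),
    "poststack, steep dip":      (2,    5),
    "3D DMO + poststack":        (5,   10),
    "3D prestack time":          (20,  50),
    "3D prestack depth":         (50, 200),
}

for step, (words, flops_per_word) in imaging_steps.items():
    total = words * flops_per_word
    print(f"{step:26s} input x{words:>3}   flops/word x{flops_per_word:>3}   total x{total:>6}")
```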
Parallel processing history
Because of the need to shorten turnaround times drastically for very compute-intensive tools such as 3D imaging, parallel computing has had a strong appeal to 3D seismic processors for at least a decade.
In the early 1980s, the seismic industry established that it could use symmetric multiprocessor (SMP) vector supercomputers to good advantage. However, such machines have proved difficult to scale economically beyond roughly 10 processors without losing reasonable system throughput. This lack of scalability results from the shared memory and shared input/output (I/O) architectures of SMP systems, as illustrated in Fig. 7. In addition, these machines were based on expensive vector technology that began reaching performance saturation in the late 1980s.
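One way to picture why shared memory and shared I/O cap SMP throughput is a simple contention model. The sketch below is our illustration, not the analysis behind Fig. 7, and the 10% shared-resource fraction is an assumed number: once the serialized demand on the shared resource exceeds what it can serve, adding processors no longer adds throughput.

```python
def smp_throughput(n_procs, shared_fraction=0.10):
    """Toy contention model: each job spends `shared_fraction` of its time on
    shared memory/I/O resources that serve only one processor at a time, so
    total throughput saturates near 1/shared_fraction. The 10% figure is an
    assumed illustration, not a measured value."""
    shared_load = n_procs * shared_fraction
    if shared_load <= 1.0:
        return float(n_procs)        # shared resources not yet a bottleneck
    return n_procs / shared_load     # bottlenecked: throughput ~ 1/shared_fraction

for n in (1, 2, 4, 8, 10, 16, 32):
    print(f"{n:2d} processors -> relative throughput {smp_throughput(n):5.1f}")
```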
The 3D seismic industry also began experimenting with massively parallel processing (MPP) computers in the mid-1980s. Yet early efforts with MPP systems were not very successful, technically or economically. As a result, only a few MPP systems were put into production use, at only a few companies, and most of them could not significantly outperform the vector supercomputers of the time. What went wrong?
First, these MPP systems employed proprietary processors that were weak in individual performance (roughly 10 megaflops) relative to vector supercomputers (400 megaflops). This weakness was compounded by the slow performance growth over time that accompanied the proprietary nature of these processors. Second, specialized programming and operational models required a large R&D effort to develop or port code for these systems, which made it impractical to develop more than a few specialized processing steps (e.g., migration) for this type of parallel system. Finally, these systems had very limited I/O capabilities.
Scalable parallel computers
The advent of powerful second-generation RISC processors for the UNIX workstation marketplace has given the computer industry a second chance to take advantage of parallelism. RISC workstations began appearing in the early 1990s and soon demonstrated that RISC processors could deliver performance in the same class as supercomputer processors (Fig. 8). This happened because intense competition among workstation vendors, and the volume of systems sold, produced a performance growth curve that quickly approached that of the vector supercomputers. And, of course, the same competition led to price-performance far better than that of supercomputers and proprietary MPP systems.
A few years after the appearance of these powerful RISC workstations, a new type of parallel system built out of the same RISC central processing units (CPUs) began to appear. We call these scalable parallel systems (SPS). An example, the IBM SP-2, is shown in Fig. 9.
In this system, the concept of distributed-memory parallelism has been taken even further. Each processor in the parallel system is a fully capable UNIX system, with many hundreds of megabytes of memory, several gigabytes of local disk, and other optional resources such as network adapters and tape devices. The processors are connected by a high-performance multistage switch that permits many simultaneous pairwise memory-to-memory data transfers among the processors using the standard transmission control protocol/Internet protocol (TCP/IP).
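A minimal sketch of the programming model this presents to applications is given below, using ordinary TCP/IP sockets between two local processes that stand in for two nodes. This is our illustration only; the port number and data sizes are arbitrary choices, and it is not IBM's switch software.

```python
import socket
import time
import numpy as np
from multiprocessing import Process

# Two "nodes", modeled here as local processes, exchange a block of trace
# samples memory-to-memory over an ordinary TCP/IP connection, the same
# standard protocol the switch presents to applications.
PORT = 5799  # arbitrary port chosen for this illustration

def receiver(n_bytes):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            buf = bytearray()
            while len(buf) < n_bytes:
                chunk = conn.recv(1 << 20)
                if not chunk:
                    break
                buf.extend(chunk)
    traces = np.frombuffer(bytes(buf), dtype=np.float32)
    print(f"receiver got {traces.size} samples")

if __name__ == "__main__":
    traces = np.arange(250_000, dtype=np.float32)   # stand-in data block
    peer = Process(target=receiver, args=(traces.nbytes,))
    peer.start()
    time.sleep(0.5)                                  # crude wait for the listener
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", PORT))
        cli.sendall(traces.tobytes())
    peer.join()
```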
In contrast to the traditional massively parallel computers, this type of scalable parallel computer has the following advantages for 3D seismic processing:
- The performance of each RISC processor in the parallel system is comparable to a traditional vector supercomputer.
- The programming and operational models are flexible.
- I/O is as scalable as computation.
- Programming paradigms for parallel and other system-related software are standardized.
- Hardware and system software are highly reliable.
New processing approach
This new type of parallel system gives us the option of pursuing a new approach to 3D seismic processing that differs from both the MPP approach and the SMP approach. This new approach combines "batch parallelism" (submitting a set of independent seismic jobs, each operating on a different dataset and running on a separate processor) with "monolithic parallelism" (submitting a single job that runs on multiple processors). This approach dramatically reduces the time required to put a parallel system into production and allows the system to be utilized efficiently.
In a modern context, batch parallelism can be implemented by submitting a large set of complete jobs to a parallel system with a large number of distributed-resource processors. In this way, seismic code that runs on a workstation can be used immediately on a parallel system without any changes. Most 3D seismic tools, with the exception of 3D imaging, are well suited to, tested for, and tuned for this computational model.
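A minimal sketch of batch parallelism follows, assuming a hypothetical directory of per-line datasets and a placeholder serial job; on a real scalable parallel system the batch queuing system would play the role that the local process pool plays in this toy example.

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def process_line(segy_path):
    """Stand-in for an unmodified serial seismic job (e.g., one sail line):
    read the dataset, apply the processing flow, write the result."""
    # ... the same serial code that runs on a single workstation ...
    return f"{segy_path.name}: done"

if __name__ == "__main__":
    # Hypothetical layout: one file per acquisition line.
    datasets = sorted(Path("raw_3d_survey").glob("line_*.segy"))
    # One worker per available processor; each job is independent of the rest.
    with ProcessPoolExecutor() as pool:
        for status in pool.map(process_line, datasets):
            print(status)
```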
The challenges for a scalable parallel system to handle batch-parallel work well are: 1) a parallel batch-job queuing system to maintain load balance and allocate resources efficiently across all processors, and 2) a distributed, transparent I/O system, especially for tape, to deliver data scalably to all processors. An example of such software for distributed tape I/O is IBM's NetTape product, which is being integrated into ProMAX to provide just such a capability.
Monolithic parallel
Monolithic parallelism refers to seismic tools for which it is impractical to split up the 3D processing task into a series of totally independent tasks or jobs. It is implemented by submitting a single (monolithic) job that divides the work among many parallel processors and I/O streams. This job manages all the parallel processors, communication, and data involved during the execution on the parallel system.
A handful of 3D imaging applications, e.g., dip moveout (DMO), poststack time/depth migration, prestack time/depth migration, and MVA, have been the driving forces behind the need for monolithic parallel seismic computing since the mid-1980s. Each such processing tool needs special re-engineering to transform it from serial code to parallel code.
The technical challenge of monolithic parallelism is to obtain reasonable total system throughput from the portion of the parallel system on which the parallel job is run. The key to throughput for a serial system is balancing of processor speed, memory usage, and I/O speed. The needs of a monolithic parallel tool extend beyond those of a serial tool to require parallel algorithm decomposition, load balancing, and interprocessor communication. The interaction among the three original factors plus these three new factors presents many software engineering challenges.
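A minimal sketch of the monolithic model is given below, with all sizes and the imaging kernel as hypothetical stand-ins: a single job decomposes the output image into tiles, farms the tiles out to worker processes, and gathers the partial results.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

NX, NZ = 1024, 512      # output image dimensions (illustrative)
N_TILES = 8             # parallel decomposition of the x axis

def image_tile(tile_index):
    """Compute one tile of the output image (stand-in for a real migration
    kernel). In production code each worker would also read its share of
    the input traces, which is where I/O design and load balance matter."""
    tile = np.zeros((NX // N_TILES, NZ), dtype=np.float32)
    # ... compute-intensive imaging work for this tile's columns goes here ...
    return tile_index, tile

if __name__ == "__main__":
    image = np.zeros((NX, NZ), dtype=np.float32)
    # One job, many workers: decomposition, scheduling, and the final gather
    # are all managed inside this single program.
    with ProcessPoolExecutor(max_workers=N_TILES) as pool:
        for tile_index, tile in pool.map(image_tile, range(N_TILES)):
            x0 = tile_index * (NX // N_TILES)
            image[x0:x0 + tile.shape[0]] = tile   # gather the partial results
    print("assembled image:", image.shape)
```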
Parallel performance
Since these issues have been treated in detail elsewhere,12, 16, 17 we will show only some parallel performance results in this section. Fig. 10 shows parallel performance measurements for a Hale-McClellan poststack depth migration of a 175 sq km 3D survey on a 16-node SP-2.
Fig. 11 shows measurements for 3D prestack depth migration by the Kirchhoff summation technique.18 This algorithm has been shown to scale very well up to 64 nodes on the SP-2, reducing the run times of large jobs from weeks to days on a full 30 sq km imaging cube.
The measurements were made by Lin Zhang on a 3D dataset acquired with a 240-channel instrument. There were about 500,000 input traces, and the output image was a cube containing 118 million amplitudes. The algorithm was run in Poughkeepsie, N.Y., on a large SP-2 with Thin-2 nodes. The benchmark results demonstrate that the theoretical linear scalability of the SP-2 holds all the way up to 64 nodes.
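For readers unfamiliar with the Kirchhoff summation idea, the sketch below shows its core loop in heavily simplified form, with a constant velocity in place of a real velocity model and purely illustrative geometry; it is not the benchmarked code.

```python
import numpy as np

VELOCITY = 2000.0   # m/s, constant velocity assumed only for this sketch
DT = 0.004          # s, trace sample interval

def migrate_trace(trace, src_x, rcv_x, image, x_coords, z_coords):
    """Spread one trace's samples into the (x, z) image: each image point
    receives the amplitude recorded at the source-to-point plus
    point-to-receiver travel time."""
    for ix, x in enumerate(x_coords):
        for iz, z in enumerate(z_coords):
            t_total = (np.hypot(x - src_x, z) + np.hypot(x - rcv_x, z)) / VELOCITY
            sample = int(round(t_total / DT))
            if sample < trace.size:
                image[ix, iz] += trace[sample]

if __name__ == "__main__":
    x_coords = np.arange(0.0, 2000.0, 50.0)    # image grid, meters (illustrative)
    z_coords = np.arange(100.0, 2000.0, 50.0)
    image = np.zeros((x_coords.size, z_coords.size))
    trace = np.random.randn(1500)              # one synthetic input trace
    migrate_trace(trace, 400.0, 800.0, image, x_coords, z_coords)
    print("image energy:", float((image ** 2).sum()))
```

One natural monolithic-parallel decomposition of such a kernel is over the input traces, with each node accumulating a private partial image that is summed at the end; the article does not state whether the benchmarked implementation works this way, but the approach is consistent with the near-linear scaling reported.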
Implications for industry
The need for new technologies and computing power to solve today's 3D seismic processing challenges is strong. Full-size 3D prestack depth migration and MVA coupled with velocity and structural model building are critical technologies for 3D seismic processing, both near term and long term.
As an example, consider the computing requirement for a 150 sq km 3D survey requiring full 3D prestack depth migration within an elapsed time of 3 months. According to the measurements presented above, this will require 160 supercomputer-class processors working in "monolithic parallel" fashion to form the full image for a single choice of velocity model. Parallel systems with this capacity are on the market, but we have not yet seen many companies using such systems in their 3D seismic processing.
We have been discussing computing needs based on a traditional processing paradigm in which the primary user of seismic processing software is a specialized processing geophysicist. The current trend toward multidisciplinary, computer-aided exploration and reservoir management will drive new 3D seismic processing technologies and computing power even further.
For example, for a geologist-interpreter to be part of an integrated workflow with a processing geophysicist, the turnaround time to observe "what-if" scenarios for migration velocity modeling should be reduced to a few hours. Without aggressive exploitation of large numbers of processors applied to a single task, each "what-if" is likely to take weeks, effectively eliminating the possibility of this type of workflow integration.
Impact of 3D seismic
3D seismic data have had a profound effect on the exploration and production business worldwide. This is because 3D seismic images provide a clear definition of the subsurface for the first time.
Oil companies are looking for low-risk ways to increase reserves. The impact of 3D seismic technology has been to increase reserves and, in so doing, to reduce finding costs (for exploration) and development costs (for production) by improving the drilling success rate dramatically. In fact, the impact of 3D seismic data extends beyond drilling into location and sizing of production platforms (offshore) and surface production facilities (onshore). Increasingly, fields with declining production profiles are being rejuvenated with 3D seismic.
There is much industry interest in looking at time-lapse 3D seismic as a tool in reservoir monitoring. This is a growth area of research at present which is likely to lead to commercial work over the next few years and which could have implications for the compute requirements of the industry.
Also, 3D seismic data have had an important impact in integrating technologies, because all disciplines (geophysics, geology, and reservoir engineering) are beginning to use 3D surveys as a basis for reservoir modeling throughout the life of each field. In many cases, as fields are produced, reservoir models will be updated continuously based on all the field data, including repeat 3D surveys.
Scalable parallel computers are well suited to 3D seismic processing for two reasons: 1) they cost less than traditional SMP vector supercomputers, and 2) they make possible new exploration and reservoir management technologies that cannot be run on SMP supercomputers.
We have argued that there is a place for parallel systems that can scale from two processors to hundreds of processors and that can exploit simultaneously the two natural types of parallelism in 3D seismic: "batch parallel" and "monolithic parallel." Such systems are now becoming widely available to the exploration and production industry and promise an exciting future for both exploration and reservoir management applications of advanced 3D seismic processing.
Acknowledgment
The authors would like to thank Wes Bauske, Luke Liu, and Lin Zhang for their contributions of material and ideas reported in this article.
Some of this material has been sourced from a paper contributed by E.O. Nestvold to a 3D Seismic Atlas entitled Applications of 3D Seismic Data to Exploration and Development, published by AAPG/SEG, October 1996. See references.
The authors also thank BP Exploration (Alaska) and Arco for their permission to reprint Fig. 1.
References
1. Nestvold, E.O., Nelson, P.H.H., "Explorers still hope to improve on 3D seismic's wealth of data," OGJ, Mar. 16, 1992, pp. 55-61.
2. Nestvold, E.O., Nelson, P.H.H., "3D Seismic: The Promise Fulfilled," PETEX 92, London, Conference Proceedings, pp. 75-79.
3. Nestvold, E.O., "3D Seismic: Is the Promise Fulfilled?" The Leading Edge, Vol. 11, No. 6, 1992, pp. 12-19.
4. Greenlee, S.M., Gaskins, G.M., Johnson, M.G., "3D seismic benefits from exploration through development: An Exxon perspective," The Leading Edge, Vol. 13, No. 7, 1994, pp. 730-734.
5. Moody-Stuart, Mark, "Opportunities from partnership," The Leading Edge, Vol. 12, No. 4, 1993, pp. 269-273.
6. Pink, Mike, "Exploration and appraisal technology - maximising rewards by integration," Offshore Northern Seas 1992, Stavanger, Norway, Aug. 25-28, 1992.
7. Rijks, E.J.H., "The use of and experiences with 3D seismic in Shell," VII Venezuelan Geophysical Congress, Caracas, Sept. 4-8, 1994.
8. Speers, Rowland, Dromgoole, Peter, "Managing uncertainty in oilfield reserves," Middle East Well Evaluation Review, No. 12, 1992, pp. 30-41.
9. Baird, Euan, "New technology in support of improved oil recovery," Offshore Northern Seas 1992, Stavanger, Norway, Aug. 25-28, 1992.
10. Nestvold, E.O., "The Impact of 3D Seismic Data on Exploration, Field Development, and Production," in Applications of 3D Seismic Data to Exploration and Development, AAPG/SEG, October 1996, pp. 1-12.
11. Nestvold, E.O., Jack, I.G., "Looking Ahead in Marine and Land Geophysics-A Conversation with Woody Nestvold and Ian Jack," The Leading Edge, Vol. 14, No. 10, 1995, pp. 1061-1067.
12. Su, C.B., Black, J.L., "Practical, Scaleable, Modern 3D Seismic Processing," The Leading Edge, to be published.
13. Weber, David J., Ratcliff, Davis W., "Seismic advances changing image of the subsalt," American Oil and Gas Reporter, Vol. 37, No. 2, 1994, pp. 49-55.
14. George, Dev, "Technology that led to Mahogany driving global exploration of salt bodies" and "Reliable subsalt surveying requires special 3D acquisition techniques," Offshore, January 1994, pp. 30-34 and pp. 35-36.
15. Moore, Dwight "Clint," Brooks, Robert O., "The Evolving Exploration of the Subsalt Play in the Offshore Gulf of Mexico," Gulf Coast Association of Geological Societies Transactions, Vol. XLV, 1995, pp. 7-12.
16. Black, J.L., Su, C.B., "Networked parallel seismic computing," Proceedings of the 1992 Offshore Technology Conference, pp. 169-176.
17. Liu, L., "3D pre-stack Kirchoff migration-parallel computation on workstations," 63rd SEG Annual Meeting, Washington, Expanded Abstracts, 1993, pp. 181-184.
18. Zhang, L., "An implementation of 3D prestack depth migration on distributed memory systems," SEG 65th Annual International Meeting, Houston, Expanded Abstracts, 1995, pp. 172-175.
The Authors
E.O. "Woody" Nestvold joined IBM Corp.'s petroleum industry polutions unit in June 1995 and is currently the exploration and production research partnership executive. Before joining IBM, he was senior geophysics consultant for Schlumberger Oilfield Services. He joined Schlumberger in 1992 after retiring from the Royal Dutch/Shell Group, where he was employed since 1962 in various geophysics research and operational positions in the U.S., Australia, and The Netherlands. His positions with Shell included chief geophysicist, head of E&P processing at the Rijswijk research lab, and head of geophysics and topography worldwide in The Hague. Nestvold received a BA from Augsburg College, Minneapolis, and MS and PhD degrees from the University of Minnesota.
Chen-Bin Su is a consultant scientist for IBM's petroleum industry solutions unit, working on parallel research and development in the parallel and geoscience group. He has been with IBM since 1989. He received a BS in geophysics from National Central University in Taiwan and an MS in geophysics from the University of Missouri.
James L. Black has been director of seismic processing software for Landmark Graphics Corp. in Englewood, Colo., since early 1996. Prior to that he managed the parallel and geoscience group in IBM's petroleum industry solutions unit. He had been with IBM since 1989. Previously, he directed 3D seismic imaging research and development for Geophysical Service (GSI). He received a BA in physics from Rice University and a PhD in applied physics from Harvard University.
Ian G. Jack is geophysical advisor and research and development project manager for BP Exploration. He has been with BP since 1978, first working in the technical services division as an acquisition specialist and then becoming manager of the acquisition services branch in 1982. Before BP, he spent 10 years with Geophysical Service (GSI), working in seismic data processing and seismic software and systems development. He received a BS in physics from the University of St. Andrews, Scotland.
Copyright 1996 Oil & Gas Journal. All Rights Reserved.