Techniques and hardware for acquiring seismic data have evolved over the last 50 years, while computing advances have revolutionized data processing and interpretation.
"Everything that we've done in seismic over the last 100 years has been an evolution. And we do what we do because we can't do it correctly," said Sam Gray, who recently retired from his post as chief scientist at CGG. "Everything that we do is kind of an approximation, and we're getting closer and closer and closer to being able to do exact science."
Part of that evolution was a change in the source of the sound waves that geoscientists use to map the subsurface.
David Monk, SEG DISC instructor, said the industry began to use vibroseis around 1960 and moved away from dynamite as a routine source of sound waves over the course of that decade.
"Onshore, vibroseis is pretty much the universal source," he said.
Vibroseis uses controlled vibrations to send sound waves into the ground, which are received back at the surface and heavily processed to help map subsurface features. In the early days, Monk said, it was common to shake the vibrators half a dozen times to ensure a good shot because of the cost of processing each individual shot. Now, it's common to record many shots without so much effort.
"We've replaced that heavy effort," he said. "Today, the industry is typically just shaking a single vibrator one time, but we're doing it in the same way, and the number of recording positions has gone up."
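The heavy processing Monk mentions begins with cross-correlating the recording against the known sweep, which compresses each long, smeared arrival back into a short pulse at the reflector's travel time. The following is a minimal sketch of that principle; all parameters (sweep band, record length, reflector times) are invented for illustration:

```python
import numpy as np

# Toy illustration of the vibroseis principle: a known sweep is emitted,
# the earth smears it across the record, and cross-correlation with the
# sweep recovers short pulses at the reflector times.

fs = 500.0                          # sample rate, Hz (assumed)
t_sweep = np.arange(0, 4.0, 1 / fs)

# Linear sweep: instantaneous frequency ramps from 8 Hz to 80 Hz over 4 s.
f0, f1 = 8.0, 80.0
sweep = np.sin(2 * np.pi * (f0 * t_sweep
                            + (f1 - f0) / (2 * t_sweep[-1]) * t_sweep**2))

# Idealized earth response: two reflectors, at 0.5 s and 1.2 s.
reflectivity = np.zeros(int(2.0 * fs))
reflectivity[int(0.5 * fs)] = 1.0
reflectivity[int(1.2 * fs)] = -0.6

# What the geophone records: the long sweep convolved with the earth.
recorded = np.convolve(reflectivity, sweep)

# Cross-correlating the record with the known sweep compresses each
# smeared arrival into a short wavelet at the reflector's travel time.
correlated = np.correlate(recorded, sweep, mode="full")
zero_lag = len(sweep) - 1           # index of zero lag in 'full' output
trace = correlated[zero_lag:zero_lag + len(reflectivity)]

# The strongest peak of the correlated trace sits at the 0.5 s reflector.
peak = np.argmax(np.abs(trace)) / fs
print(f"strongest reflector recovered at {peak:.2f} s")
```

Shaking a vibrator six times and stacking, as in the early days, simply repeats this with summed records to raise signal-to-noise.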
That increase in recording positions has happened both onshore and offshore, with hardware for offshore acquisition via towed streamers becoming more plentiful and longer.
Gray said CGG introduced multistreamer acquisition in the early 1970s with two or three streamers towed behind a seismic acquisition vessel.
"Gradually, over the next half-century, that evolved into many, many, many streamers with a crossline aperture of kilometers. Very, very wide," he said.
Adding longer streamers helped produce better, cleaner data, he said, though imaging remained limited by computing power.
Marianne Rauch, second vice president of the Society of Exploration Geophysicists and TGS principal technical adviser, said one of the more recent advances in offshore seismic acquisition is the use of ocean bottom nodes (OBNs), which place geophones directly on the seabed. One draw of OBNs, she said, is that they remove the need to factor in the water column when processing the sound waves.
"It improves the data quality," she said, and it makes the survey more flexible.
OBNs also improve the quality of 4D, or time-lapse, seismic.
"OBN is fantastic because you can leave your nodes on the seafloor, and you record again after a year or half a year or two years, whatever. And this means that you will get actually one of the most accurate repeat surveys," she said. "In the 4D environment, the problem is always that the repeat surveys are not really the same," but using OBN removes that concern.
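In time-lapse work, the baseline and monitor surveys are subtracted so that only production-related changes in the reservoir remain; Rauch's point is that a repeatable OBN geometry keeps everything else identical, making the difference meaningful. A toy sketch of that differencing, with made-up grid sizes and amplitudes:

```python
import numpy as np

# Toy sketch of the 4D idea: with a perfectly repeatable receiver
# geometry, subtracting the baseline survey from the monitor survey
# isolates changes in the reservoir. All values are illustrative.

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.05, size=(50, 50, 100))  # inline, xline, time

# Monitor survey: identical except where production changed the rocks.
monitor = baseline.copy()
monitor[20:30, 20:30, 40:45] += 0.5   # hypothetical amplitude change

difference = monitor - baseline

# Outside the produced zone the difference vanishes, because the geometry
# (and thus the recorded background) repeats; only the change remains.
changed = np.argwhere(np.abs(difference) > 0.1)
print("changed samples:", len(changed))   # 10 * 10 * 5 = 500
```

With towed streamers, small positioning differences between the two surveys leak non-zero values into the difference everywhere, which is the non-repeatability problem Rauch describes.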
And while seismic acquisition vessels increasingly towed more and longer arrays, the hardware for onshore acquisition shrank in size.
In the 1970s, the industry was using geophones that were heavy and bulky, Gray said.
"Geophones were so heavy that you could only lay out so many in a day, and this limited the size of the seismic survey that you could perform," he recalled.
Over time, they have been miniaturized.
"A geophone is now on a chip. And, of course, that chip has to be placed into the earth, so it needs a casing," he said.
"The chip might only be a quarter of an inch wide, and the casing for it is much smaller than geophones were 50 years ago. So, it's been miniaturized and it's been made digital. This allowed higher channel counts and higher fold."
That also enabled seismic acquisition in a variety of settings, including desert, mountain and Arctic areas.
"Mountain areas are still really tough because you need mountain climbers to plant these geophones, so mountainous land acquisition tends to still be sparser than marine and Arctic acquisition," Gray said.
Currently, the geophone chips are recovered, he said, but there is research into sacrificial and biodegradable units that can be left behind.
Data explosion
The digital revolution changed how seismic data was processed. Between the 1960s and 1980s, computing shifted from analog to digital, Rauch said, requiring the replacement of analog recording systems with digital systems. It also allowed better storage of data and the ability to better process the data.
"It just became much easier and more effective," she said.
At the same time, the volumes of data were increasing. In the late '70s, every recorded shot provided data from under 100 channels, Monk said.
"That has grown exponentially, to the stage where there are crews now recording 200,000 or 300,000 channels for each shot. And by the end of 2024, perhaps 2025, it'll be a million. So, every time we record, every time we take a shot onshore, we're going to be recording a million channels of data and a million pieces of data," he said.
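To get a feel for why a million channels is daunting, a back-of-the-envelope calculation helps. The sampling interval, record length, and sample size below are assumed typical values, not figures from the article:

```python
# Rough per-shot data volume at the million-channel counts quoted above.
# All acquisition parameters here are assumptions for illustration.

channels = 1_000_000        # one channel per recording position
sample_interval_s = 0.002   # 2 ms sampling (assumed)
record_length_s = 6.0       # 6 s listening time per shot (assumed)
bytes_per_sample = 4        # 32-bit floating-point samples (assumed)

samples_per_trace = int(record_length_s / sample_interval_s)
bytes_per_shot = channels * samples_per_trace * bytes_per_sample

print(f"{samples_per_trace} samples per trace")
print(f"{bytes_per_shot / 1e9:.0f} GB per shot")   # 12 GB
```

At thousands of shots per survey, raw volumes under these assumptions quickly run into petabytes, which frames Monk's question below.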
The data itself, he said, hasn't become more sophisticated or complicated; there is just more of it.
"What do we do with all that data?" he asked.
Gray said the hardware that initially enabled major seismic processing advances was the world's first supercomputer, the Cray-1. Oil companies were among the first to take advantage of the Cray-1's computing capabilities.
"It just blew people's minds, which was great. It revolutionized the computation," he said.
Over the years, Rauch said, high-performance computing became another game-changer. What once had required many hours to process could now be processed in an hour, she said.
"The technology has really, really moved fast, from very slow computers to huge and powerful computers," she said.
Powerful tools able to process the data, she said, changed the seismic processing world. For example, the use of machine learning has enabled noise reduction and signal enhancement, she said.
Gray noted that access to high-powered computing changed the way the industry migrated data.
"Before we had big computers, migration was possible, but it was torture," he said.
Computational power also enabled full-waveform inversion (FWI), which revolutionized velocity model building for migration. The latest FWI imaging produces better seismic images, and next-generation elastic FWI, which uses the full elastic wave equation, has in the past year produced the most reliable lithology estimates to date, he said.
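At its core, FWI searches for the velocity model whose simulated waveforms best match the recorded ones. The following is a deliberately tiny caricature of that idea, with one unknown velocity, a time-shifted wavelet standing in for wave-equation modeling, and a brute-force scan in place of the gradient updates real FWI uses; every number is illustrative:

```python
import numpy as np

# One-parameter caricature of FWI: pick the velocity whose predicted
# waveform best matches the observed waveform. Real FWI inverts for
# millions of model cells with wave-equation gradients; here the model
# is a single layer velocity and the physics is a shifted wavelet.

fs, depth = 1000.0, 1000.0              # sample rate (Hz), reflector depth (m)
t = np.arange(0.0, 2.0, 1.0 / fs)

def ricker(t0, f=15.0):
    """Ricker wavelet centered at time t0 (seconds)."""
    a = (np.pi * f * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def model_data(v):
    """Predicted trace: one arrival at the two-way travel time 2*depth/v."""
    return ricker(2.0 * depth / v)

observed = model_data(2500.0)           # data from a "true" 2,500 m/s earth

# Scan trial velocities and keep the one minimizing the waveform misfit.
trial_vs = np.arange(2000.0, 3001.0, 10.0)
misfits = [np.sum((model_data(v) - observed) ** 2) for v in trial_vs]
best = float(trial_vs[int(np.argmin(misfits))])
print(f"recovered velocity: {best:.0f} m/s")   # 2500 m/s
```

The computational burden Gray alludes to comes from replacing this scalar scan with full wavefield simulations repeated over enormous 3D models, which is why FWI had to wait for modern computing.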
The types of surveys also evolved over the years, moving from 2D to 3D, and then adding in the time component for 4D seismic. 2D surveys yield a vertical cross-section of the subsurface, while 3D surveys generate a cube: in effect, a volume built from many stacked cross-sections.
While the concept for 3D seismic had existed for a while, Rauch said, "it became actually practical" in the 1980s. "3D was really a game-changer because now we could image the world as it more or less really is."
And visualization rooms, which gained popularity in the early 2000s, took seismic data out of the two-dimensional world of paper and helped geoscientists see the data in space.
Gray said the visualization center made it possible to share enormous data volumes with many people and help them "understand structurally" the data. It was, he said, an alternative to printing the images out on paper.
As sophisticated as the processing has become, the price to process hasn't changed that much since the early days of computing, Monk said.
"Compute capability has been coming up exponentially and the cost of compute has been coming down," he said. "The cost to actually process seismic data has stayed almost constant over the entire time. But what we are doing with data is far more complex, and we are doing it on far more data."