Advanced Imaging


Updated: January 12th, 2011 10:01 AM CDT

Going to the Well

Accelerated number crunching makes oil and gas exploration more efficient
Integrated computation and visualization workflow using NVIDIA Tesla and Open Inventor from Mercury Computer Systems. © Mercury Computer Systems
Input data: seismic amplitude volume managed and visualized using Open Inventor. © Mercury Computer Systems
Output data: seismic attribute volume managed and visualized using Open Inventor. © Mercury Computer Systems
NVIDIA's Tesla GPU server. © NVIDIA

By Barry Hochfelder

In a typical acquisition, it takes 100,000 shots to sample the earth every 25 meters, according to CGGVeritas. That's where the 10TB of raw data comes from. Then, the company says, every byte of data will need a million operations for analysis.

NVIDIA refreshes its GPU every 12 to 15 months, a cadence driven by the gaming industry: gamers want more realism; they want to see skin detail and hair detail. That refresh rate, plus the needs of the oil and gas industry and others that require huge amounts of data to be crunched, led to the development of CUDA, which was released in June 2007. Oil and gas simulations used to run for weeks at a time. Letting geophysicists get their numbers in a day, rather than weeks or months, saves millions of dollars.

The Number Crunch

Previously, it was difficult to combine GPUs into a sort of "supercomputer." You couldn't really stack them in a rack mount and build high-performance systems, says Mike Heck, Technical Director of Mercury's Visualization Science Group. "Now you can have high-performance graphics boards and computing acceleration. You also can get an external box with two Tesla boards. It connects to the computer—it's really an extension of the PCI Express bus—but allows twice as much memory and computing power."

The addition of CUDA, he adds, allows programmers to write algorithms for the GPU in C code, which is more familiar to them. They can implement algorithms that work on the graphics board and accelerate computing.

"You've got to manage out-of-core data (data sets that are too big for memory) to bring in the data you need. You might as well do management on the fly, computing on a graphics board," Heck says. "But you also can move to a Tesla board and do your computing and bring it back for the workflow."
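The out-of-core pattern Heck describes can be sketched as processing a volume in fixed-size bricks, so the full data set never has to fit in memory (or on the board) at once. This is an illustrative sketch, not Open Inventor's actual API; the brick size, function name, and the reduction performed are all assumptions.

```c
#include <stddef.h>

#define BRICK 4096  /* samples per brick; assumed to fit board memory */

/* Hypothetical out-of-core traversal: the volume is visited one brick
 * at a time and reduced to a single result (here, a plain sum). In a
 * real GPU workflow, each brick would be copied to the board, reduced
 * by a kernel, and the partial result copied back for the workflow. */
double process_volume(const float *volume, size_t total)
{
    double acc = 0.0;
    for (size_t off = 0; off < total; off += BRICK) {
        size_t n = (total - off < BRICK) ? total - off : BRICK;
        for (size_t i = 0; i < n; ++i)   /* stand-in for the kernel */
            acc += volume[off + i];
    }
    return acc;
}
```

The same loop structure works whether the "process a brick" step runs on the CPU or is swapped for a transfer plus kernel launch, which is the flexibility Heck alludes to.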


