In a typical acquisition, it takes 10⁵ shots to sample the earth every 25 meters, according to CGGVeritas. That's where the 10TB of raw data comes from. Then, the company says, every byte of data will need 10⁶ operations for analysis; at roughly 10¹³ bytes, that works out to on the order of 10¹⁹ operations per survey.
NVIDIA refreshes its GPUs every 12 to 15 months, a pace driven by the gaming industry: gamers want more realism; they want to see skin detail, hair detail. That refresh rate and the needs of the oil and gas industry, along with others that require huge amounts of data to be crunched, led to the development of CUDA, which was released in June 2007. Oil and gas simulations used to run for weeks at a time. Letting geophysicists get their numbers in a day rather than in weeks or months saves millions of dollars.
The Number Crunch
Previously, it was difficult to combine GPUs into a sort of "supercomputer." You couldn't really stack them in a rack mount and build high-performance systems, says Mike Heck, Technical Director of Mercury's Visualization Science Group. "Now you can have high-performance graphics boards and computing acceleration. You also can get an external box with two Tesla boards. It connects to the computer—it's really an extension of a PCI express bus—but allows twice as much memory and computing power."
The addition of CUDA, he adds, allows programmers to write algorithms for the GPU in C code, which is more familiar to them. They can implement algorithms that work on the graphics board and accelerate computing.
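To give a sense of what that C-style GPU programming looks like, here is a minimal, hypothetical CUDA sketch (not CGGVeritas's actual code): a kernel that applies a gain to a buffer of seismic samples, with one GPU thread per sample.

```cuda
// Hypothetical sketch: scale a buffer of samples on the GPU in CUDA C.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void scale_samples(float *samples, float gain, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per sample
    if (i < n)
        samples[i] *= gain;
}

int main(void)
{
    const int n = 1 << 20;                  // ~1M samples
    size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);      // host buffer
    for (int i = 0; i < n; ++i)
        h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, bytes);                  // device buffer
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every sample.
    scale_samples<<<(n + 255) / 256, 256>>>(d, 2.0f, n);

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
    printf("first sample: %f\n", h[0]);

    cudaFree(d);
    free(h);
    return 0;
}
```

The point of the example is the familiarity Heck describes: apart from the `__global__` qualifier and the `<<<blocks, threads>>>` launch syntax, everything here is ordinary C.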
"You've got to manage out-of-core data (data sets that are too big for memory) to bring in the data you need. You might as well do management on the fly, computing on a graphics board," Heck says. "But you also can move to a Tesla board and do your computing and bring it back for the work flow."
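The out-of-core workflow Heck describes, bringing in only the data you need, computing on the board, and bringing results back, can be sketched as a chunked streaming loop. This is a hypothetical illustration, assuming a data set resident in host memory that is larger than GPU memory; `process` stands in for the real seismic computation.

```cuda
// Hypothetical sketch of out-of-core processing: the data set is larger
// than GPU memory, so it is streamed through the board one chunk at a time.
#include <cuda_runtime.h>
#include <stddef.h>

__global__ void process(float *chunk, size_t n)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n)
        chunk[i] *= 0.5f;   // placeholder for the real seismic computation
}

// Stream a host-resident data set through a fixed-size device buffer.
void process_out_of_core(float *host_data, size_t total, size_t chunk_len)
{
    float *d_buf;
    cudaMalloc(&d_buf, chunk_len * sizeof(float));

    for (size_t off = 0; off < total; off += chunk_len) {
        size_t n = (total - off < chunk_len) ? total - off : chunk_len;

        cudaMemcpy(d_buf, host_data + off, n * sizeof(float),
                   cudaMemcpyHostToDevice);            // bring in what you need
        process<<<(unsigned)((n + 255) / 256), 256>>>(d_buf, n);
        cudaMemcpy(host_data + off, d_buf, n * sizeof(float),
                   cudaMemcpyDeviceToHost);            // bring results back
    }
    cudaFree(d_buf);
}
```

In production code the copies and kernel launches would typically be overlapped with CUDA streams and asynchronous copies so the board never sits idle waiting for data, but the basic shape of "manage the data on the fly" is the same.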