Image processing algorithms usually consume a lot of computing resources. In many cases, the continuously growing performance of the CPUs found in powerful PCs is sufficient to handle such tasks within the specified time. However, leading vendors of image processing hardware and software are constantly searching for ways to push performance beyond what the PC's CPU can deliver.
Typical methods of increasing speed in image processing include distributing the computing tasks across multiple multi-core processors or using specialized FPGAs. Each of these technologies has its own advantages and disadvantages, but all have one aspect in common: they generally do not use the fastest processor in the system optimized for imaging algorithms—the processor on the graphics card, also known as the GPU (Graphics Processing Unit).
These "racers" among processors have a remarkable development history. Their evolution has been driven principally by the gaming industry, where the demands placed on the graphical rendering of game scenes and animations have grown enormously. Sales of millions of game consoles have added to this demand, producing GPUs in large volumes and generating the profits that further fund the development of graphics components. Other industrial sectors, including image processing, are now reaping the benefits.
Graphics processors outperform other imaging-acceleration methods in many technical respects, even compared with the fastest available FPGAs (see Table). For example, they are clocked at rates 10 to 20 times higher than those of typical FPGAs, so that, in combination with larger memory options, they can achieve data throughput rates up to 500 times greater than those of standard FPGAs.
However, these increased speeds are not fully available to image-processing users: offloading the algorithms to the GPU introduces a delay in the data flow from image capture to data processing. Even accounting for this effect, various analyses of compute-intensive operations indicate a performance gain by a factor of 2 to 10 when using a GPU in place of a CPU, while the CPU remains free for other tasks at the same time.
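The tradeoff described above can be sketched with a simple back-of-envelope model. The following Python snippet is purely illustrative and not from the article; the 40 ms frame time and 4 ms transfer delay are hypothetical placeholders chosen only to show how a fixed capture-to-GPU transfer delay erodes the raw 2–10× kernel speedup.

```python
def effective_time(cpu_time_s, gpu_speedup, transfer_delay_s):
    """Per-frame processing time when offloaded to the GPU:
    the kernel runs gpu_speedup times faster than on the CPU,
    but each frame pays a fixed data-transfer delay."""
    return cpu_time_s / gpu_speedup + transfer_delay_s

cpu_time = 0.040  # hypothetical: 40 ms per frame on the CPU
for speedup in (2, 5, 10):  # the 2-10x range cited above
    t = effective_time(cpu_time, speedup, transfer_delay_s=0.004)
    print(f"raw speedup {speedup:2d}x -> {t * 1000:.1f} ms/frame "
          f"(net gain {cpu_time / t:.1f}x)")
```

Under these assumed numbers, a raw 10× kernel speedup yields only about a 5× net gain per frame, which is why the article notes that the GPU's full speed is not available to the end user once the data path is included.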