
Advanced Imaging Magazine

Updated: January 12th, 2011 10:01 AM CDT

Optimizing Performance

Getting the most out of your multi-core CPU or GPU requires the right software
[Figure: Speed-up factor chart. © Matrox Imaging. Adding cores to a system only benefits certain operations; convolutions can be greatly accelerated with additional cores. (Benchmarking performed on a 1K x 1K x 16-bit image using two 2.33GHz Intel® Core 2 Quad processors, each with 2MB cache.)]

[Figure: Speed-up factor vs. a 3GHz Intel Xeon. © Matrox Imaging. Processing primitives can be greatly accelerated with GPU processing. (Benchmarking performed on a 1K x 1K x 16-bit image using two 2.33GHz Intel® Core 2 Quad processors; two cores are used for addition, while eight cores are used for convolution, LUT mapping, and warping. The overhead of transferring data to and from the GPU board is not taken into account.)]

By Sarah Sookman, Matrox Imaging

Your tried-and-true machine vision application works well on the "old" single-core machine. But with the deluge of affordable multi-core PCs equipped with gaming-quality graphics processing units (GPUs), it might be tempting to deploy your robust imaging applications on them. With better performance, you can offer a faster product to your own customers, and since you're paying for the hardware, you might as well get the most out of it. It sounds good in theory, but in practice an imaging application deployed on a single-core system will not simply run twice as fast on a dual-core machine. This article first defines a system's available processing resources, namely the CPU, GPU, and FPGA. Then we'll cover the ins and outs of optimizing your system's performance and describe how software lets you do it.
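To see why doubling the cores doesn't double the speed, consider Amdahl's law. The short C++ sketch below is an illustration only (the 0.8 parallel fraction is an assumption, not a Matrox benchmark): if only a fraction p of the processing can run in parallel, n cores give a speedup of 1 / ((1 - p) + p / n) rather than n.

// Hypothetical illustration: Amdahl's law with an assumed parallel fraction.
// Speedup on n cores = 1 / ((1 - p) + p / n), where p is the fraction of the
// work that can actually run in parallel.
#include <cstdio>

int main()
{
    const double p = 0.8;                  // assumed parallel fraction
    const int coreCounts[] = {1, 2, 4, 8};
    for (int n : coreCounts) {
        const double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%d core(s) -> %.2fx speedup\n", n, speedup);
    }
    return 0;
}

With 80 percent of the work parallelized, two cores give roughly a 1.7x speedup and eight cores only about 3.3x, which is why the software's ability to parallelize the remaining work matters so much.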

Why use multi-core?

There are several reasons to develop with multi-core CPUs or GPUs. The fact is, today's machines are equipped with more processing resources than we've ever had before; the processing resources required for most machine vision applications are already in the chassis. GPUs provide a relatively untapped resource for accelerating or offloading image processing functions, and multi-core CPUs are now commonplace. So not only can advanced and complex imaging algorithms be deployed on these systems to do more work, but the systems can also handle higher data rates, thereby increasing throughput. More processing resources translate into higher frame rates and more inspected parts per minute or units per hour.
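As a rough illustration of how the extra cores get used, the sketch below splits a 16-bit image into horizontal bands and processes each band on its own core. It is plain C++ with std::thread, not Matrox MIL code, and the per-pixel operation and buffer layout are assumptions for the example.

// Minimal sketch: data-parallel processing of a 16-bit image by splitting it
// into horizontal bands, one worker thread per band. The per-pixel operation
// (halving each pixel value) is just a stand-in for a real primitive.
#include <cstdint>
#include <thread>
#include <vector>

static void process_band(uint16_t* img, int width, int rowStart, int rowEnd)
{
    for (int y = rowStart; y < rowEnd; ++y)
        for (int x = 0; x < width; ++x)
            img[y * width + x] = static_cast<uint16_t>(img[y * width + x] >> 1);
}

void process_parallel(uint16_t* img, int width, int height)
{
    unsigned numThreads = std::thread::hardware_concurrency();
    if (numThreads == 0) numThreads = 1;       // fall back to a single core
    const int bandHeight = height / static_cast<int>(numThreads);

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < numThreads; ++t) {
        const int rowStart = static_cast<int>(t) * bandHeight;
        const int rowEnd = (t + 1 == numThreads) ? height : rowStart + bandHeight;
        workers.emplace_back(process_band, img, width, rowStart, rowEnd);
    }
    for (std::thread& w : workers)
        w.join();                              // wait for every band to finish
}

Because each band is independent, point and neighborhood operations of this kind scale nearly linearly with core count, which is the sort of gain shown in the convolution benchmark above.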

Convergence of system functions is another advantage that can be realized when deploying applications with multiple cores. For example, a single computing box can control the components of a machine vision system: imaging, automation control, the HMI, communication with back/front office systems, and archiving for compliance and traceability.

The processor landscape

Intel's first dual-core processors were introduced into consumer workstations in 2005. Today, quad-core and dual quad-core processor systems are common, and AMD plans to launch a 12-core processor in 2012. Multi-core CPUs are now the norm. Why? Chip manufacturers found that increasing a chip's clock rate pushed power consumption and heat dissipation well beyond acceptable limits. Furthermore, in the context of Moore's Law (with the transistor count doubling in a fixed period of time), they reached a practical limit: more transistors are available than a single core can usefully exploit. So they started building dual-core systems.

But there's more to the CPU story than clock speed. Historically, imaging performance soared after the introduction of Intel's MMX extensions. These extensions are built on SIMD (single instruction, multiple data), a technique for operating on multiple data elements in parallel with a single instruction. Today's challenge is finding a software package that can take advantage of functional parallelism across cores on top of these SIMD instructions.
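For a sense of what SIMD means in code, the sketch below uses SSE2 intrinsics (the successor to MMX, not Matrox library code) to add a constant offset to eight 16-bit pixels per instruction; the buffer length is assumed to be a multiple of eight for brevity.

// Minimal SIMD sketch: add a constant offset to a 16-bit image buffer
// eight pixels at a time using SSE2 intrinsics.
// Assumes count is a multiple of 8; a real routine would handle the tail.
#include <emmintrin.h>   // SSE2 intrinsics
#include <cstddef>
#include <cstdint>

void add_offset_simd(uint16_t* pixels, std::size_t count, uint16_t offset)
{
    const __m128i vOffset = _mm_set1_epi16(static_cast<short>(offset));
    for (std::size_t i = 0; i < count; i += 8) {
        // Load 8 pixels, add the offset to all of them in one instruction,
        // then write the result back.
        __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(pixels + i));
        v = _mm_adds_epu16(v, vOffset);          // saturating add avoids wrap-around
        _mm_storeu_si128(reinterpret_cast<__m128i*>(pixels + i), v);
    }
}

A good imaging library applies this kind of vectorization automatically; the harder part, as noted above, is layering multi-core parallelism on top of it.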


