Increasing industry interest in color imaging is further compounding the growth in data-storage requirements. A color image typically adds three components (red, green, and blue) to the intensity value of a monochrome image, so its data requirement can be as much as four times that of a comparable monochrome image. The vision system could derive the intensity information from the three color components alone, but the computation involved typically places an unacceptable load on the system's processing capacity.
MEMORY IS THE BARRIER
Raw processing performance is less of an issue. Today's PCs use state-of-the-art processors with as many as four cores on chip, capable of handling data at rates of 600 to 700 Mbytes/sec. The advent of PCI Express gives system backplanes the capacity to transfer data at 5 Gbytes/sec. These speeds are typically high enough to handle images as fast as they are acquired.
The machine vision process, however, works with pixels in blocks rather than one at a time. Thus, vision system inspection rates are based on average rather than continuous processing speeds. The system acquires an object’s image, begins processing, and finishes while the next object moves into the inspection area.
To achieve maximum processing efficiency, however, the vision processor must buffer data in its local memory (on-line) so that it does not have to wait for data to load. An external storage device such as a disk drive is too slow to keep up with frame-rate requirements, especially because such storage requires the data to move twice: once to the drive and again to the vision system. In addition, disk drives carry an overhead penalty because they use a file structure for data access rather than the first-in, first-out (FIFO) access that vision systems require. Finally, given the image-size increases now occurring and the latency of image storage and retrieval, a drive would need to offer terabytes of storage to provide adequate buffering, and drive systems of that size would be cost prohibitive.
On-line data buffering does not suffer these drawbacks. The system does not need to move information twice and is easily configured to store and retrieve data using FIFO access with no overhead penalty. Memory cost also is not a major issue. The performance of on-line storage typically is fast enough that buffering requirements reduce to two images at most (one incoming and one in processing).