By Michael Gibbons
Point Grey Research
In the beginning—about 12 years ago—there was analog. The landscape of camera interface technology was simpler and easier to understand, with most vision systems consisting of an analog camera connected to a frame grabber. Then digital interfaces like FireWire, Camera Link, USB, and GigE were introduced, and the debate began. Now, with a plethora of devices popping up everywhere from the local Best Buy to megastores in Japan, USB 3.0 seems set to add fuel to the fire.
To understand how USB 3.0 will impact vision in the future, it’s important to understand the road that led to its development. The Universal Serial Bus is the most common serial peripheral interface in the history of computing. Present in virtually every computer, it is the standard interface for peripherals such as mice, keyboards, and hard drives, with billions of USB-enabled devices shipping every year. The USB standard was originally developed by Intel, Microsoft, Compaq, and others for peripheral components. These same member companies formed the USB Implementers Forum (USB-IF) to provide a support organization and forum for the advancement and adoption of Universal Serial Bus technology.
The USB 1.0 specification was released in 1996 and ran at 1.5 Mbit/s (low speed) and 12 Mbit/s (full speed). While useful for lower-data-rate peripherals, it was not until the USB 2.0 specification (high-speed USB) was released in 2000, with a maximum raw data throughput of 480 Mbit/s (60 MByte/s), that the standard became practical for applications such as video and data storage. At a time when many industrial camera manufacturers defined themselves by the interface they supported, some companies saw a chance to differentiate themselves using this new technology. This led to the birth of the first USB 2.0 digital video cameras. Of course, like any interface, USB 2.0 has its strengths and limitations when applied to machine and computer vision. It provides sufficient bandwidth for many applications, has built-in support on virtually every computer, and is typically very low cost (no frame grabber required). Conversely, USB 2.0 bandwidth may not be enough for many of the new high-data-rate image sensors becoming available, provides limited power, and is not as efficient in its signaling and data transfers as some other digital interfaces.
USB 2.0 employs a host-directed (master-slave) architecture in which every transaction either goes to or comes from the master, typically the host computer. Communication is half-duplex, allowing data to flow in only one direction at a time. The host initiates all data transfers, handles all arbitration, and dictates data flow to, from, and between the attached peripherals. This adds system overhead and can slow data flow control. In addition, USB 2.0 uses polling as its primary signaling method, meaning a USB 2.0-enabled camera must be constantly polled by the host software to check for activity.

The USB 2.0 specification offers two different data-transfer mechanisms: bulk and isochronous. Bulk transfers guarantee delivery, but not bandwidth, and provide error detection in the form of a CRC field on the data payload, along with re-transmission mechanisms that ensure data is transmitted and received without error. In practice, bulk transfers top out at approximately 40 MByte/s, which is more than sufficient to handle the resolutions and frame rates of the more common 1/4-inch, 1/3-inch, or 1/2-inch image sensors on the market. Isochronous transfers, on the other hand, guarantee bandwidth, making this mechanism well suited to the transmission of real-time data. Isochronous transfers provide low-latency, deterministic data delivery and error detection via a CRC. However, USB 2.0 isochronous transfers are limited to roughly 24 MByte/s.
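These ceilings follow directly from USB 2.0's high-speed timing, which divides the bus into 8000 microframes per second. A quick back-of-the-envelope check, using the per-microframe packet limits from the USB 2.0 specification (13 bulk packets of 512 bytes, or 3 isochronous transactions of 1024 bytes):

```python
# USB 2.0 high-speed timing: 8000 microframes per second (125 us each).
MICROFRAMES_PER_SEC = 8000

# Bulk: up to 13 packets of 512 bytes fit in one microframe. This is the
# theoretical ceiling; protocol overhead and host scheduling bring real-world
# bulk throughput down to roughly 40 MByte/s.
bulk_bytes_per_microframe = 13 * 512
bulk_max = bulk_bytes_per_microframe * MICROFRAMES_PER_SEC   # bytes/s

# Isochronous: at most 3 transactions of 1024 bytes per microframe,
# which yields the ~24 MByte/s figure quoted for real-time video.
iso_bytes_per_microframe = 3 * 1024
iso_max = iso_bytes_per_microframe * MICROFRAMES_PER_SEC     # bytes/s

print(f"bulk ceiling: {bulk_max / 1e6:.1f} MB/s")   # 53.2 MB/s theoretical
print(f"iso  ceiling: {iso_max / 1e6:.1f} MB/s")    # 24.6 MB/s
```

The isochronous number lands almost exactly on the roughly 24 MByte/s limit cited above; the bulk number is the raw ceiling before the overhead that reduces it to the approximately 40 MByte/s seen in practice.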
Fast forward to 2008. The USB 3.0 Promoter Group, made up of HP, Intel, Microsoft, NEC, ST-NXP Wireless, and TI, finishes development of the USB 3.0 specification and transitions its management to the USB-IF. The design goal was simple: build on the strengths of USB 2.0 while addressing many of its limitations. The USB 3.0 specification increases raw data throughput to 5 Gbit/s (625 MByte/s). Though 8b/10b encoding sets a practical limit of about 500 MByte/s, this still represents a substantial performance improvement over USB 2.0. USB 3.0 adds five wires, for a total of nine in the connectors and cabling, and utilizes a unicast dual-simplex data interface that allows data to flow in both directions at the same time, an improvement over USB 2.0's half-duplex communication model. The USB 3.0 specification preserves the legacy bulk, isochronous, control, and interrupt transfer types, but significantly increases isochronous throughput with three bursts of 128 MByte/s per service interval, for a total of 384 MByte/s.

The USB 3.0 architecture has many similarities to PCI Express (PCIe), and although there are obvious functional differences between them, both aim to increase bandwidth and lower power consumption. USB 3.0 is still a host-directed protocol and maintains much of the existing USB 2.0 device model. One important change, however, is the signaling method: USB 3.0 uses asynchronous notification, which allows a device to signal the host when it is ready for a data transfer. This significantly reduces system overhead and CPU usage compared to the polling mechanism in USB 2.0. A variety of other protocol improvements, such as streaming support for bulk transfers and a more efficient token/data/handshake sequence, are designed to improve system efficiency and reduce power consumption.
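The raw and encoded figures above, and what they mean for camera frame rates, can be sketched with simple arithmetic. The sensor resolution below is a hypothetical example (not from the article), and the calculation ignores protocol overhead, so the frame rates are upper bounds:

```python
# Raw line rates in bits per second.
USB2_RAW = 480e6      # high-speed USB 2.0
USB3_RAW = 5e9        # SuperSpeed USB 3.0

# USB 3.0 uses 8b/10b line encoding: 10 bits on the wire carry 8 data bits,
# so only 80% of the raw rate is available for payload.
usb3_encoded = USB3_RAW * 8 / 10

print(f"USB 2.0 raw:            {USB2_RAW / 8 / 1e6:.0f} MByte/s")   # 60
print(f"USB 3.0 raw:            {USB3_RAW / 8 / 1e6:.0f} MByte/s")   # 625
print(f"USB 3.0 after 8b/10b:   {usb3_encoded / 8 / 1e6:.0f} MByte/s")  # 500

# Hypothetical 1280 x 1024, 8-bit monochrome sensor: one byte per pixel.
# Best-case frame rate each interface could sustain, before overhead:
frame_bytes = 1280 * 1024
print(f"USB 2.0 max fps: {USB2_RAW / 8 / frame_bytes:.0f}")
print(f"USB 3.0 max fps: {usb3_encoded / 8 / frame_bytes:.0f}")
```

Even after the 8b/10b encoding penalty, the usable bandwidth is roughly eight times that of USB 2.0, which is the headroom the new high-data-rate sensors mentioned earlier would need.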