Time delayed integration (TDI) is a well-known and proven method for capturing images of a moving object by synchronizing the pixel sequence with the velocity of the object (or of the camera). In this way, TDI extends the effective exposure time without blurring the image. However, implementing TDI in CMOS technology is not an easy task, since CMOS, unlike CCD, lacks an inherent mechanism for charge transfer and charge accumulation. Now, a novel architectural concept based on a focal-plane shutter and a digital adder can equip CMOS sensors with TDI functionality—opening up a viable alternative to traditional CCDs.
When an object or visual scene is moving relative to the image sensor, the resulting image, especially at longer exposure times, will be blurred. In many such situations, however, the velocity of the object relative to the camera is known or predictable, so it can be compensated for. This is often the case in industrial image capture, for example when scanning documents or inspecting goods on a conveyor. Another well-known application of motion compensation is satellite imaging of the earth’s surface.
In principle, a line sensor oriented perpendicular to the direction of movement should do the trick. But the resulting short exposure times, especially at higher velocities or under weak illumination of the scene, lead to an unfavorable signal-to-noise ratio (SNR). It therefore makes sense to choose a two-dimensional, but TDI-enabled, pixel array as shown in Figure 1. This configuration appropriately delays the pixel signals of a given column (running in the direction of the movement) and adds them up in sync with the actual scan sequence. Thanks to this delay-and-add operation, the incoming light of each scene line strikes all pixels of the column, one after the other, and their serial addition greatly improves the signal-to-noise ratio.
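The delay-and-add principle can be illustrated with a short numerical sketch. The code below is not the chip architecture described in the article but a minimal digital model of it, under two stated assumptions: the scene advances by exactly one pixel row per line period, and a full frame is read out each period. Each TDI stage holds a partial sum; after every readout the partial sums are shifted along with the motion, so the bottom stage has accumulated one exposure from every row of the column.

```python
import numpy as np

def digital_tdi(frames):
    """Digital delay-and-add TDI over a column-oriented pixel array.

    frames: array of shape (T, H, W) -- one full readout per line period,
    while the scene moves downward by exactly one pixel row per period
    (an assumption made for this sketch).
    Returns one output line per period; from the (H-1)-th line onward,
    each output line is the sum of H exposures of the same scene line.
    """
    T, H, W = frames.shape
    acc = np.zeros((H, W))             # one partial sum per TDI stage
    out = np.empty((T, W))
    for t in range(T):
        acc += frames[t]               # every stage integrates its current view
        out[t] = acc[-1]               # bottom stage has now seen its line H times
        acc = np.roll(acc, 1, axis=0)  # carry the partial sums along with the motion
        acc[0] = 0                     # a fresh, empty stage enters at the top
    return out
```

With H stages the accumulated signal grows H-fold, which is the source of the SNR gain over a single short exposure. Note, though, that this digital summation also adds up the read noise of every sample—unlike the practically noise-free charge summation of a CCD.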
Up to now, TDI functionality has been confined to charge-coupled devices. CCD technology enables the transfer of charge packets across the sensor, which can easily be synchronized with the relative velocity of the movement. Moreover, this summing of charges is practically noise-free.
The active-pixel-sensor (APS) pixels of a CMOS image sensor make this task much more difficult, because they do not allow direct charge transfer. Moreover, the light-induced charges are converted to voltage signals inside the pixels themselves, so in a conventional CMOS sensor they can be summed only off-chip.