Advanced Imaging Magazine

Updated: January 12th, 2011 10:01 AM CDT

Night and Day

Electronic eyes that see in the dark and daylight
Figure 1 (© e2v): The lighting dynamics of day and night based on the characteristics of the human eye. Also shown is the wavelength response of typical human vision in photopic (daylight) and scotopic (night) conditions.

Figure 2 (© e2v): The cross-section comparison of Front Side Illumination and Back Side Illumination processes shows the more direct coupling of light to the pixel.

Figures 3a and 3b (© e2v): The image distortion of a rotating fan captured with the Electronic Rolling Shutter and Global Shutter.

Figure 4 (© e2v): The quantum efficiency measured on a front side illuminated G/S imager with a 5.8 µm pixel size. A BSI version would be closer to 100 percent.

Figure 5 (© e2v): The L3 EMCCD sensor and camera for imaging below starlight lux levels.

By Gareth Powell, e2v

From dusk to dawn ...

Cameras that monitor high-security installations, whether military or civilian, such as border control or airport perimeter security, are expected to provide their essential service by day and by night. Vigilance often must peak at night, when most intrusion events are likely to occur under the cover of extremely low light levels. The cameras may be fixed, fixed with mechanical or electronic pan/tilt/zoom, or mobile, mounted on manned or unmanned vehicles or aircraft. The same application needs are shared by private and public civilian vehicles, which increasingly carry night-vision or IR imaging systems to detect pedestrians.

With reference to the human eye, night vision (scotopic) is governed mainly by the response of rods, which can detect even single-photon flux but provide no color information. This is why human night vision appears monochromatic: "at night, all cats are grey" (see Figure 1).

... and from dawn to dusk

In many cases, the very same cameras must work in the diverse and widely varying dynamic conditions of daylight, which present their own unique challenges to electronic imaging systems. Extracting exploitable images from the shaded parts of a scene when the majority of the image is floodlit by natural or artificial sources demands a wide-dynamic-range sensing element. Very high video frame rate imaging, even in relatively bright environments, paradoxically requires characteristics closer to those of low-light imaging, because extremely short integration times demand sufficiently high sensitivity combined with low noise.

All-lighting cameras are effectively non-existent. The extremes, from the brightest midday sunlight to the darkest cloudy night, necessitate specialized low-light systems deployed in conjunction with the slightly less challenging wide-dynamic-range daylight cameras. Again with reference to the human eye, day vision is photopic, governed mainly by the response of cones, which are sensitive to color.

Figure 1 puts the magnitude of the day and night light scale into perspective in lux, a photopic unit of light related to the characteristics of the human eye. Also shown is the wavelength response of typical human vision in photopic (daylight) and scotopic (night) conditions.

The Silicon Under the Lens

While the lens plays a pivotal role in supplying light to the electronic image sensor beneath it, the imperfections of transforming light photons into electrical signals become the major limiting factor. To achieve imaging in the lowest light, it is essential to minimize the noise and maximize the signal sensitivity characteristics of the sensor. Our preoccupation now becomes the signal-to-noise ratio (SNR).

Similar to the lens aperture, the sensor's individual pixel aperture, or pixel surface area, has a direct influence on its sensitivity. The smaller the pixel, the fewer photons it collects during its integration period; therefore low-light performance requirements impose lower limits on pixel size, while cost, optics formats and image resolution requirements combine to force upper limits on it. Within this general geometrical relationship to low-light performance, the process and pixel architecture itself influences the photon-to-electron conversion efficiency, or quantum efficiency (QE), and the effective area of the photon conversion zone, termed the pixel fill factor, which is expressed as a percentage of the pixel area; a back-of-envelope illustration follows below.
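To make this concrete, the following back-of-envelope Python sketch estimates the photoelectrons collected by a single pixel and the resulting shot-noise-limited SNR. The pixel pitch, QE, fill factor, read noise and lux levels are illustrative assumptions rather than figures from this article, and full-well saturation at the bright end is ignored.

    import math

    # Back-of-envelope photon budget for one pixel. At 555 nm, 1 lux is
    # 1/683 W/m^2, and one 555 nm photon carries h*c/lambda ~ 3.58e-19 J,
    # so 1 lux ~ 4.1e15 photons per m^2 per second.
    PHOTONS_PER_LUX = (1.0 / 683.0) / 3.58e-19  # photons / m^2 / s per lux

    def signal_electrons(lux, pixel_pitch_um, t_int_s, qe, fill_factor):
        """Mean photoelectrons collected during one integration period."""
        area_m2 = (pixel_pitch_um * 1e-6) ** 2
        return lux * PHOTONS_PER_LUX * area_m2 * t_int_s * qe * fill_factor

    def snr(signal_e, read_noise_e):
        """Shot-noise-limited SNR with an added read-noise floor."""
        return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

    # Moonless night (~1e-3 lux) vs. office lighting (~500 lux), 30 fps.
    for lux in (1e-3, 500.0):
        s = signal_electrons(lux, pixel_pitch_um=5.8, t_int_s=1 / 30,
                             qe=0.6, fill_factor=0.5)
        print(f"{lux:8g} lux -> {s:12.1f} e-, SNR ~ {snr(s, read_noise_e=5.0):.1f}")

At the night-time end, the signal falls to a few electrons, below the read-noise floor, which is exactly why low-light designs fight for every electron of signal and every fraction of an electron of noise.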

Noise sources in the imager, readout and interfacing electronics are multiple and varied, and some have a direct relationship with temperature, dark current being one example. Another temperature-dependent noise contributor in the pixel results from kT/C effects, which leave random residual voltages on the sensing and storage nodes during the clocking and reset periods that necessarily precede integration.
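The size of this reset noise follows directly from the sense-node capacitance: the RMS voltage left on a capacitor after reset is sqrt(kT/C), or sqrt(kTC)/q when expressed in electrons. The short Python sketch below evaluates it for a few assumed sense-node capacitances; the capacitance values are illustrative, not taken from any particular sensor.

    import math

    # kT/C (reset) noise: the RMS voltage frozen onto a capacitance C at
    # temperature T after reset is sqrt(kT/C); referred to charge, it is
    # sqrt(kTC)/q electrons.
    K_B = 1.380649e-23     # Boltzmann constant, J/K
    Q_E = 1.602176634e-19  # electron charge, C

    def ktc_noise_electrons(capacitance_f, temp_k=300.0):
        """RMS reset noise on the sense node, in electrons."""
        return math.sqrt(K_B * temp_k * capacitance_f) / Q_E

    for c_ff in (1.0, 5.0, 10.0):  # assumed sense-node capacitances in fF
        n = ktc_noise_electrons(c_ff * 1e-15)
        print(f"C = {c_ff:4.1f} fF -> ~{n:.0f} e- rms at 300 K")

Tens of electrons of reset noise would swamp a night-time signal of a few electrons, which is why the CDS scheme described below matters so much.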

'T's in a CMOS Pixel

Pixel architecture frequently is referred to by the number of transistors per pixel. Most CMOS imagers tend to use an electronic rolling shutter, which is the most economical to integrate and can be realized with as few as three transistors (3T). While commendable for its simplicity, the 3T pixel architecture suffers from higher pixel-generated temporal noise from kT/C (or thermal) noise in the circuit, which cannot simply be removed.

The 4T architecture performs Correlated Double Sampling (CDS) to remove the kT/C temporal noise. This architecture also enables transistor-sharing schemes between pixels that reduce the effective number of transistors per pixel to fewer than two. Evidently, fewer transistors in the pixel free up more area for the photosensitive part, raising the fill factor. As previously stated, we are concerned with SNR, so by offering both fill-factor optimization and noise reduction, the shared 4T architecture is widely exploited; a toy simulation of the CDS principle is given below. Different ways to couple light more directly into the pixel, maximizing both fill factor and quantum efficiency, have also been developed for low-light imaging.
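The reason CDS works is that the kT/C offset is frozen onto the sense node at reset and therefore appears identically in both the reset sample and the signal sample; subtracting one from the other cancels it. The toy Python simulation below illustrates the principle, with noise magnitudes that are assumed for illustration only.

    import random

    random.seed(42)

    def read_pixel_cds(signal_e, ktc_rms_e=30.0, other_rms_e=3.0):
        """One CDS read: reset and signal samples share the same reset offset."""
        reset_offset = random.gauss(0.0, ktc_rms_e)  # frozen at reset
        sample_reset = reset_offset + random.gauss(0.0, other_rms_e)
        sample_signal = reset_offset + signal_e + random.gauss(0.0, other_rms_e)
        return sample_signal - sample_reset          # kT/C term cancels

    reads = [read_pixel_cds(signal_e=100.0) for _ in range(10000)]
    mean = sum(reads) / len(reads)
    rms = (sum((r - mean) ** 2 for r in reads) / len(reads)) ** 0.5
    print(f"mean = {mean:.1f} e-, residual noise = {rms:.1f} e- (30 e- kT/C removed)")

The 30-electron reset noise disappears from the output; what remains is only the uncorrelated sampling noise, about 3 x sqrt(2) electrons in this toy model.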

One such example currently gaining traction is Back Side Illumination (BSI). This technique involves flipping the imager wafer over, thinning the bulk silicon down to the thickness of the photodiode, and then continuing post-processing (color filters, microlenses, etc.) on the thinned side of the wafer. As a result, the fill factor of the pixel is greatly increased, since the photodiode now resides almost at the surface of the pixel. In the conventional front side process, the photodiode is buried beneath metal layers that separate the active photon collection zone from the light entering at the surface of the pixel (see Figure 2). Because of the relationship between the horizontal pixel aperture (x, y) and the optical stack height (z) above the photodiode, the fill-factor increase is greater for smaller pixels than for larger ones. However, BSI still yields quantum efficiency benefits even for larger pixels, where the fill-factor improvement is smaller, because it also eliminates the photon scattering and optical shielding caused by the metal layers. Further improvements in spectral response can be achieved by using broadband or band-pass optimized anti-reflective coatings on the surface of the sensor array. As a case example, the typical quantum efficiency of a BSI CMOS imager with a 2.7 µm x 2.7 µm pixel is more than double that of the equivalent front side illuminated imager (see Figure 2); a simple illustrative comparison follows below.
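As a simple way to see the scale of the case example above, the Python snippet below compares an effective QE, taken here as fill factor times intrinsic QE, for front side and back side illumination. The fill-factor and QE numbers are assumptions chosen only to illustrate how shadowing by the metal stack can more than halve the response of a small FSI pixel; they are not measurements from the article.

    # Effective QE as fill factor x intrinsic QE (a simplification that
    # ignores the scattering and shielding effects also removed by BSI).
    def effective_qe(fill_factor, intrinsic_qe):
        return fill_factor * intrinsic_qe

    # Assumed values for a small 2.7 um pixel, for illustration only.
    fsi = effective_qe(fill_factor=0.35, intrinsic_qe=0.70)  # metal stack shadows the aperture
    bsi = effective_qe(fill_factor=0.95, intrinsic_qe=0.70)  # photodiode at the surface
    print(f"2.7 um pixel: FSI ~{fsi:.0%}, BSI ~{bsi:.0%}, ratio {bsi / fsi:.1f}x")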

Although the mass-produced CMOS imager industry claims BSI to be a "revolutionary" technology, it is interesting to note that this BSI technique has been used in high-performance CCD imagers for more than 20 years.
