Achieving Optical Zoom by Means of Optical Distortion
This paper presents a novel approach to obtaining optical zoom by means of optical distortion. The first solution provides standard-range zoom and is especially suited to mid- and low-range cell phone cameras, where cost and size are important metrics. Continuous zoom is achieved while retaining all the information in the original image, so there is no reduction in resolution. In this solution, the optical assembly has no moving parts; there are no changes in lens shape or refractive index. Since the lens parts are fixed, the product is very robust and therefore well suited to commercial cell phone cameras. An alternative solution provides extended-range, continuous zoom. It uses the same concept as the first solution but harnesses a dual-position lens, making it suitable for feature-rich, high-end cell phones and DSCs. It remains significantly less expensive, smaller, and less complex than a traditional zoom lens, which typically requires precise lens movement and positioning. In both cases, digital restoration is required to remove distortion from the captured image, but this does not demand high computational effort.
The basis of achieving optical zoom by means of optical distortion is to exploit the interaction between the point spread function (PSF) of an optical system and the quantized resolution of solid-state imagers.
PSF and the Dependence of Resolution on the Angle of Incidence
It is beneficial to understand the behavior of the PSF of an imaging system over the whole area of a pixelated detector. Consider an on-axis point source (0° incident angle). Light gathered by the lens and imaged on the detector produces the point spread function. In simple terms, a perfect point source, no matter how small, will always appear at a minimum limiting size because imperfections in the optical path cause the light to spread by the time it impinges on the detector. The PSF has finite support and therefore dictates the resolution of the imaging system. For example, an imaging system cannot resolve two objects once they are so close that their PSFs overlap; the pair is then interpreted as one larger object instead of two smaller ones. A typical measurement of the PSF of an on-axis object, magnified in color, is shown in Figure 3. The optical system comprised three plastic lenses similar to the type commonly found in camera phones.
The PSF of a lens depends on the incident angle that a point source creates with its optical axis. As this angle increases, the PSF broadens and its shape deviates from circular symmetry. Figure 4 shows the same measurement as Figure 3, but for an angle of incidence of 25°.
Comparing the results of Figures 3 and 4, it is apparent that the off-axis PSF is wider than the on-axis PSF; that is, system resolution decreases as the angle of incidence increases and the image point approaches the FOV border. Typically, the difference in resolution between the axis and the FOV border is 30 to 50 percent.
For a conventional solid-state imager, pixel size is constant over the whole sensor area and therefore over the whole FOV. This means there is a varying degree of mismatch between the PSF and the pixel size across the sensor. In the central region, the PSF is usually smaller than a pixel, so resolution is dictated by the sensor and not by the optics (an under-sampling condition). Conversely, at the FOV borders, the PSF is usually larger than the pixel pitch, so resolution in that region is dictated by the optics and not by the sensor (an over-sampling condition). Therefore, the image in a standard camera system is not spread optimally over the sensor area, and the available information resources are not fully used. At the periphery of the sensor, where the PSF is larger than the pixel, several pixels sample a single finest object detail; the image at the FOV borders could therefore be sampled at a lower rate without information loss. The opposite condition exists in the central portion of the image, where a single pixel can be illuminated by two discrete object details. In this region, it would be beneficial to sample the image at a higher rate so that the pixel size at least matches the PSF.
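The under- and over-sampling conditions above can be sketched numerically. The PSF model and all numbers below are illustrative assumptions, not data from the paper; the point is only that a PSF growing with field angle crosses the pixel pitch somewhere between the axis and the FOV border.

```python
# Illustrative sketch (not the paper's data): classify the sampling
# condition across the field of view, assuming a PSF whose width grows
# linearly with field angle. Both the model and the numbers are assumptions.

PIXEL_PITCH_UM = 1.75          # typical camera-phone pixel pitch (illustrative)

def psf_width_um(angle_deg, on_axis_um=1.2, growth_per_deg=0.03):
    """Hypothetical PSF width model: wider at larger incident angles."""
    return on_axis_um * (1.0 + growth_per_deg * angle_deg)

def sampling_condition(angle_deg):
    """'under-sampled' when the PSF is smaller than a pixel (sensor-limited),
    'over-sampled' when it is larger (optics-limited)."""
    width = psf_width_um(angle_deg)
    if width < PIXEL_PITCH_UM:
        return "under-sampled"
    if width > PIXEL_PITCH_UM:
        return "over-sampled"
    return "matched"

for angle in (0, 10, 25):
    print(angle, psf_width_um(angle), sampling_condition(angle))
```

With these assumed numbers the center of the field is sensor-limited and the 25° field is optics-limited, mirroring the mismatch described above.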
Optical Distortion to Match PSF to Pixel Size
Optical zoom with no moving parts can be accomplished in an optical system by controlled distortion of the image so that the PSF matches the pixel size over the whole sensor area. The approach is based on unique, novel fixed-focus optics that provide magnification at the center of the FOV with respect to the FOV borders. In effect, the distorted image is magnified in the center and compressed near its borders: compared to a standard imaging system, the central region of the image occupies more of the sensor area while the image borders occupy less. Illustrations of two types of distortion that achieve this result are shown in Figure 5.
The purpose of the controlled distortion is to ensure that the PSF in the border area becomes smaller than the PSF provided by a standard lens module, so that it is spread over a smaller number of sensor pixels. No information loss occurs as long as the PSF is designed to have a region of support of about a single pixel. Consider now the center of the FOV. Here the image is magnified up to the point where the resulting PSF size is also about one pixel. Once again, no loss of information occurs, since the image resolution in this region was dictated by the sensor pixel size to begin with. One may therefore consider the optical distortion as a magnification mapping that matches the PSF to the pixel size over the whole FOV. Such a condition is optimal: no information is lost through under-sampling, and no pixels are wasted through over-sampling. As a result, the information contained in the captured image is maximal.
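The matching condition can be stated as a simple ratio. Using the same hypothetical PSF model as before (an assumption, not the paper's design data), the relative local magnification that maps the PSF onto one pixel is greater than 1 on axis (magnify the center) and less than 1 at the border (compress the edges):

```python
# Sketch of the matching condition: choose a local optical magnification
# M(theta) so that the imaged PSF covers about one pixel everywhere.
# The PSF model and numbers are illustrative assumptions.

PIXEL_PITCH_UM = 1.75

def psf_width_um(angle_deg, on_axis_um=1.2, growth_per_deg=0.03):
    """Hypothetical PSF width model: wider at larger incident angles."""
    return on_axis_um * (1.0 + growth_per_deg * angle_deg)

def required_magnification(angle_deg):
    """Relative magnification that maps the local PSF onto one pixel:
    > 1 magnifies (image center), < 1 compresses (FOV border)."""
    return PIXEL_PITCH_UM / psf_width_um(angle_deg)
```

Under these assumptions the center would be magnified by roughly 1.5x while the 25° field would be compressed to roughly 0.8x, which is the qualitative shape of the distortion profiles shown in Figure 5.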
Standard-Range Optical Zoom System
The image obtained by a camera module whose optics are designed to match the pixel size to the PSF over the whole sensor area will be distorted. When unity magnification is required, distortion correction is applied by amalgamating the information from pixels in the center of the FOV. To achieve zoom, the image is cropped, corrected for distortion, and expanded to match the original image size. Since the magnification of the central image portion has been performed optically rather than by digital interpolation, the magnified output image has higher resolution than comparable images produced using digital zoom. The result is an imaging system with optical zoom capability but with the simplicity of a fixed optical assembly: no moving parts and a lower overall cost. The only limitations of standard-range optical zoom are a zoom range restricted to a little over 2X and a small computational overhead for correcting the distortion.
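The crop-and-expand step can be sketched as follows. This is a minimal illustration of the geometry only; the distortion-correction stage is omitted, and the helper names are ours, not the paper's:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Simple bilinear resize of a 2-D array (illustrative helper)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
    bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy

def zoom_by_crop(img, zoom):
    """Crop the central 1/zoom of the frame and expand back to full size.
    With the distortion optics described above, the central crop already
    carries optically magnified detail, so the expansion recovers real
    resolution rather than interpolating detail that was never captured,
    as plain digital zoom would."""
    h, w = img.shape
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    return bilinear_resize(crop, h, w)
```

For example, `zoom_by_crop(frame, 2.0)` keeps the central quarter of the frame area and expands it back to the original dimensions.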
Digital Correction of the Distorted Image
As presented in the previous section, the acquired image is distorted: its center is magnified while its borders are compressed. The image therefore requires digital processing for all desired zoom magnifications. However, the distortion created by the lens is known and fixed, allowing a predetermined transformation to calculate the correct location of each distorted pixel in the undistorted output image. Since transformed pixel locations may fall between the fixed positions of the output image pixels, which form a rectangular grid, interpolation is required to map one to the other. Again, the interpolation kernels for each zoom magnification and for each pixel site are preconfigured.
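A precomputed remap table of this kind can be sketched as below. The radial model is a hypothetical stand-in for the lens's known, fixed distortion (the paper does not publish its profile); only the structure, a map built once per zoom setting and reused for every frame, reflects the scheme described above:

```python
import numpy as np

def build_remap(h, w, k=0.3):
    """Precompute, once per zoom setting, where each undistorted output
    pixel samples the distorted capture. The radial gain 1 + k*(1 - r^2)
    is a hypothetical stand-in for the real lens profile: it pulls output
    samples outward near the center (undoing the optical magnification)
    and leaves the border nearly untouched."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    half_diag = np.hypot(cy, cx)
    r = np.hypot(ys - cy, xs - cx) / half_diag      # 0 at center, 1 at corner
    scale = 1.0 + k * (1.0 - r ** 2)                # > 1 at center, 1 at corner
    return cy + (ys - cy) * scale, cx + (xs - cx) * scale

def remap_bilinear(img, map_y, map_x):
    """Resample the distorted image at the (generally non-integer)
    precomputed positions using bilinear interpolation; samples that
    fall outside the frame are clamped to the border."""
    h, w = img.shape
    y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
    fy = np.clip(map_y - y0, 0.0, 1.0)
    fx = np.clip(map_x - x0, 0.0, 1.0)
    return (img[y0, x0] * (1 - fy) * (1 - fx)
            + img[y0, x0 + 1] * (1 - fy) * fx
            + img[y0 + 1, x0] * fy * (1 - fx)
            + img[y0 + 1, x0 + 1] * fy * fx)
```

In a production pipeline the per-pixel interpolation weights would themselves be baked into the table, as the text notes; here they are recomputed for clarity.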
Consider the case of unit magnification, where the whole FOV is required to reproduce the image. Because of the optical distortion, the transformed locations of the central pixels are denser, so the magnified center is compressed, while the image borders are expanded. Once the exact locations are known, interpolation assigns values to the rectangular grid of output pixels. This is illustrated in Figure 6. Figure 6(a) shows the rectangular sensor, in which the blue region contains the pixels that ultimately provide the undistorted full-FOV image and the red region contains redundant pixels. The transformed positions of the pixels after digital correction are illustrated in Figure 6(b).
For higher magnification factors, the desired central portion is cropped and corrected in the same fashion, without compression or with less compression according to the desired zoom value. The distortion-correction processing can be applied at the end of the image processing chain, before JPEG compression takes place.
Illumination and F# Considerations
It is known that in mechanical zoom lenses the effective focal length of the system increases with zoom. Therefore, the F# of the system increases with zoom, and for the same exposure time and aperture size, an image at higher zoom appears darker.
This is not the case for zoom derived from optical distortion. The distorting lens has a focal length that monotonically decreases with image field. The aperture decreases with field as well, but more slowly. Therefore, for each magnification, which corresponds to a certain image field, the F# provided by the distorting lens is smaller than the F# at the same magnification in a common zoom lens. The result is exposure times that are independent of the zoom magnification, with no loss of illumination.
The special properties of the F# result in uncommon relative-illumination behavior, which is compensated for by the digital zoom-lens algorithm. An example, given in Figure 7, is for a lens with a maximum magnification of about 2.
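The exposure penalty of a conventional zoom follows directly from F# = f/D. The focal lengths and aperture diameter below are illustrative numbers, not measured lens data:

```python
# Illustrative arithmetic (assumed numbers): the exposure cost of a
# conventional mechanical zoom, where focal length grows with zoom
# while the aperture diameter stays fixed.

def f_number(focal_mm, aperture_mm):
    """F# is focal length divided by entrance-pupil diameter."""
    return focal_mm / aperture_mm

f1 = f_number(4.0, 1.4)   # 1X: assumed 4 mm focal length, 1.4 mm aperture
f2 = f_number(8.0, 1.4)   # 2X: focal length doubles, aperture unchanged

# Image-plane illuminance scales as 1/F#^2, so 2X zoom needs
# (f2/f1)^2 times the exposure for equal brightness.
print(round((f2 / f1) ** 2))   # -> 4
```

With the distorting lens, by contrast, focal length and aperture both fall toward the border, the aperture more slowly, so the F# at each cropped field stays below the conventional value and exposure time need not grow with zoom.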