Holography has been successfully used as an optical technique since it was invented by Dennis Gabor in 1948. It differs significantly from photography, which uses lenses to image a scene and records only the intensity of the observed optical wavefield, as perceived by the human eye. Holography does not use lenses to form an image. It can record and reconstruct the entire set of information associated with an optical wavefield, including both the intensity and the phase. Because all recording media respond only to changes in intensity, the phase information must be converted into variations of intensity to be captured. A hologram is the interference pattern formed between two wavefields: the light scattered from the surface of the object and a coherent beam called the reference wave. This pattern is coded as bright and dark micro-fringes and is usually invisible to the human eye because of its high spatial frequencies. Figure 1 shows a playing die with dimensions of 13mm×13mm×13mm, along with its digital hologram and reconstruction.
Conventional holograms are recorded on a flat photographic plate and are reconstructed by illuminating the plate with a beam identical to the reference beam used in the recording procedure. However, the development of computer technology and high-resolution CCD/CMOS devices has made it possible not only to perform the reconstruction numerically in the computer, but also to record the hologram digitally with a high-resolution CCD/CMOS sensor. This potentially very powerful technique is known as digital holography. It offers not only great potential for fast, accurate, full-3D capture and display of objects, but also their quantitative measurement. No wet-chemical or other time-consuming processes of the kind used in conventional holography are required, meaning that digital holography can be performed in nearly real time; the entire processing time depends only upon the acquisition time of the camera and the duration of the numerical reconstruction performed in the computer. The Fresnel-Kirchhoff integral mathematically describes the diffraction of a light wave at an aperture mounted perpendicular to the incoming beam. In the case of holography, the hologram can be regarded as the aperture, so the numerical calculation of the Fresnel-Kirchhoff integral is of vital importance in performing the numerical reconstruction of the digital hologram. Three terms appear in the reconstruction result shown in Figure 1(c), in accordance with the theory of classical optical holography: the zero-order term is in the center, the out-of-focus twin image is on the right side and the image of the object can be seen on the left.
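As an illustration, the single-FFT discrete Fresnel transform commonly used to evaluate this diffraction integral numerically can be sketched as follows. This is a minimal sketch in Python with NumPy, assuming a plane reference wave of unit amplitude; the function name and parameters are our own, not taken from the authors' software, and real reconstructions would also scale the output coordinates by λd/(NΔ).

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, distance, pixel_pitch):
    """Reconstruct a digital hologram with the discrete Fresnel transform.

    Approximates the Fresnel-Kirchhoff integral by multiplying the
    hologram by a quadratic phase factor (chirp) and taking one FFT.
    Assumes a plane reference wave of unit amplitude.
    """
    ny, nx = hologram.shape
    x = (np.arange(nx) - nx // 2) * pixel_pitch
    y = (np.arange(ny) - ny // 2) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    chirp = np.exp(1j * np.pi / (wavelength * distance) * (X**2 + Y**2))
    # One FFT yields the field in the reconstruction plane; shift the
    # zero-frequency (zero-order) term to the center of the image.
    field = np.fft.fftshift(np.fft.fft2(hologram * chirp))
    return np.abs(field)  # intensity image: zero order, real and twin images
```

The magnitude of the returned field contains all three terms described above: the central zero-order term flanked by the real image and the twin image, as in Figure 1(c).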
The interference pattern generated by the superposition of the reference wave and the wave reflected from the object must be adequately resolved to allow us to recover the original intensity and phase information of the light coming from the object. The maximum spatial frequency that can be resolved is determined by the maximum angle between these waves. To obtain accurate reconstructed images of satisfactory quality, each micro-fringe in the hologram must be sampled by at least two pixels. Photographic emulsions used in optical holography have resolutions of up to 5,000 line pairs per millimeter (Lp/mm). Using these materials, holograms with angles of up to 180° between the reference wave and the object wave can be recorded, but the maximum resolvable spatial frequency of a CCD camera is only about 100 Lp/mm.
The camera we used in our measurement system is a Jenoptik AG (Jena, Germany) ProgRes® MFscan with a pixel size of 6.45μm×6.45μm. This means that the maximum angle between the reference wave and the object wave that still allows the interference pattern in the hologram to be resolved is less than 2.81°. For off-axis holography, more than half of the full resolution of the hologram is occupied by the twin image and the zero-order term, which means that, in practice, the angle must be less than half of this 2.81°. Hence the maximum object size that can be measured using these techniques is currently limited, which is why much of the research in this area is carried out in digital holographic microscopy. The sampling theorem must be carefully observed, or the contrast of the whole hologram decreases, or even vanishes in extreme cases. Figure 2 shows the effect of different CCD pixel sizes in digital holography. Only the resolution of the CCD was changed in these images to vary the pixel size; all other conditions remained unchanged.
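The 2.81° figure follows directly from the sampling condition. Since each fringe must span at least two pixels, the finest resolvable fringe spacing is 2Δx, and the grating equation gives the maximum inter-beam angle. A quick check, assuming a HeNe wavelength of 632.8nm (which matches the quoted angle; the article does not state the wavelength at this point):

```python
import numpy as np

wavelength = 632.8e-9   # assumed HeNe laser wavelength, in meters
pixel_pitch = 6.45e-6   # ProgRes MFscan pixel size, in meters

# Nyquist: each fringe needs at least 2 pixels, so the finest resolvable
# fringe spacing is 2 * pixel_pitch; sin(theta_max) = wavelength / (2*pitch).
theta_max = np.degrees(np.arcsin(wavelength / (2 * pixel_pitch)))
print(f"maximum angle: {theta_max:.2f} degrees")  # ~2.81
```

For off-axis recording, as noted above, the usable angle is in practice less than half this value.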
The die in Figure 2(a) is shown clearly, but the image in Figure 2(b) is blurred and incomplete because of aliasing caused by insufficient sampling. With a further reduction in CCD resolution the fringe pattern in the hologram can no longer be resolved by the sensor, so the image disappears in Figure 2(c).
It is possible to enlarge the maximum resolvable angle either by using a laser source with a long wavelength or by using a CCD camera with a very small inter-pixel distance. In practice, most long-wavelength laser sources are diode lasers, which have a large divergence angle and poor coherence.
Wavelengths above 750nm are invisible to the human eye, and normal CCD cameras have insufficient quantum efficiency in this band. Therefore, most researchers use light sources such as helium-neon (HeNe) lasers, argon-ion lasers and high-powered solid-state pulsed lasers. It is straightforward to achieve better spatial frequencies by reducing the pixel size of the CCD sensor. However, the light intensity falling on a single pixel drops as the pixel size decreases, generating shot noise that severely degrades the image quality. There is obviously a limit to how far the pixel size can be reduced while remaining free of the degrading effects of shot noise, and current image sensor technology has almost reached this limit, at a pixel area of about 40µm² (Park et al., 2003). Since both of these direct methods for increasing object size are impractical, other methods have to be found to measure objects of larger sizes over reasonable recording distances.
Recently, a signal-processing-based approach known as super-resolution (SR) image reconstruction has proven useful for overcoming the restrictions involved in obtaining higher-resolution images. The term SR, originally used in optics, refers to algorithms that operate mainly on a single image to extrapolate the spectrum of an object beyond the diffraction limit, i.e. SR restoration (Kang and Chaudhuri, 2003). The synthetic aperture method is a type of SR restoration. These two concepts, SR image reconstruction and SR restoration, share a common focus on recovering high-frequency information that has been lost or degraded during image acquisition. However, the cause of this loss differs between the two: SR restoration in optics attempts to recover information beyond the diffraction cut-off frequency, while the SR image reconstruction method used in engineering tries to recover high-frequency components that have been corrupted by aliasing. Throughout this article we are referring to SR image reconstruction.
The SR image is produced from multiple low-resolution (LR) images. The basis for increasing the spatial resolution in SR techniques is to capture multiple LR images of the same scene. This can be achieved by several acquisitions from one camera, or from multiple cameras installed in different positions. Each LR image represents a different "view" of the same scene: the LR images are sub-sampled and shifted with sub-pixel precision. It is not useful to shift them by integer numbers of pixels, because then each image would contain the same information and there would be nothing new from which to reconstruct an SR image. By using different sub-pixel shifts for the multiple captures, each LR image contributes new information that can be exploited to obtain an SR image.
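The difference between integer-pixel and sub-pixel shifts can be seen in a toy 1-D sketch. The "scene" below is purely illustrative (an arbitrary sine signal, not hologram data), sampled by a detector four times coarser than the scene grid:

```python
import numpy as np

# A finely sampled "scene", captured by a detector whose pixel spans
# 4 fine samples (illustrative signal only).
scene = np.sin(2 * np.pi * np.arange(100) / 7.0)

lr_a = scene[0::4]      # reference capture
lr_int = scene[4::4]    # detector shifted by one whole pixel (4 fine samples)
lr_half = scene[2::4]   # detector shifted by half a pixel (2 fine samples)

# An integer-pixel shift merely revisits the same sample lattice
# (lr_int equals lr_a displaced by one index), whereas the half-pixel
# shift lands between the old samples and records new values.
```

Here `lr_int` carries no new sample values relative to `lr_a`, while `lr_half` samples positions of the scene that the reference capture never saw.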
Most SR image reconstruction methods consist of three stages: registration, interpolation, and restoration (Park et al., 2003). These steps can be implemented separately or simultaneously, according to the reconstruction method adopted. Figure 3 displays the basic premise of SR methods in four-scan mode.
In our experiments in the General Engineering Research Institute at Liverpool John Moores University, a ProgRes® MFscan CCD camera with a "micro-scanning" function—sometimes referred to as "pixel shifting"—was used to capture SR digital holograms. The image sensor of such a camera can be moved over sub-pixel distances by means of piezoelectric actuators. Using this technique the image plane is "scanned" by the CCD array sensor in a similar manner to a line scanner. The successively captured single images are recombined to produce an image with greatly increased spatial resolution.
The problem of aliasing can be overcome by applying the SR technique. Figure 5(a) shows an image that exhibits severe aliasing because the angle between the object wave from the die and the reference wave is larger than the CCD camera can resolve. The die is displayed incompletely and in the wrong place: theoretically the image of the object should be on the left-hand side of the central zero-order term, which it clearly is not. Now compare the poor quality of Figure 5(a) with the greatly improved reconstruction using SR holography shown in Figure 5(b). The contrast of the image is much better and the aliasing vanishes. This image has been reconstructed from a hologram taken in micro-scanning mode. By shifting the CCD chip three times, each time by a distance of half the pixel size in the x and y axes, four holograms with resolutions of 1024×1024 pixels are recorded and then combined to produce a super-resolution hologram with a resolution of 2048×2048 pixels. This verifies the ability of the SR image reconstruction technique to increase the resolution of holograms and to eliminate the aliasing effects caused by under-sampling. Therefore, by using SR methods, larger objects can be recorded and the distance between the object and the hologram can be shortened without resorting to the use of optical lenses.
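A minimal sketch of the four-scan combination step, assuming the four micro-scanned captures can simply be interleaved onto a grid of twice the density (the function and argument names are our own; a general SR pipeline would also include the registration and restoration stages mentioned above):

```python
import numpy as np

def interleave_four_scan(h00, h10, h01, h11):
    """Combine four half-pixel-shifted LR holograms into one SR hologram.

    h00: unshifted capture; h10: shifted half a pixel in x;
    h01: shifted half a pixel in y; h11: shifted in both axes.
    The output has twice the resolution in each dimension.
    """
    ny, nx = h00.shape
    sr = np.empty((2 * ny, 2 * nx), dtype=h00.dtype)
    sr[0::2, 0::2] = h00  # original sample positions
    sr[0::2, 1::2] = h10  # samples between columns
    sr[1::2, 0::2] = h01  # samples between rows
    sr[1::2, 1::2] = h11  # samples between both
    return sr
```

Four 1024×1024 captures combined this way yield the 2048×2048 super-resolution hologram described above.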
It is natural to use interpolation to increase the size of the hologram. However, single-image interpolation is unable to recover high-frequency components that have been lost or damaged during the LR sampling process. This situation is shown in Figure 6. A hologram of the same die is recorded at a distance of 629mm using the same optical system that was used to obtain Figure 5. It has a resolution of 1024×1024 pixels. The reconstructed image is displayed in Figure 6(a), where the three terms are well separated. The original hologram is then down-sampled to a new resolution of 512×512 pixels, preserving every other pixel in the x and y dimensions. A bilinear interpolation is then carried out on this down-sampled hologram to create a new hologram in which the resolution is restored to the original 1024×1024 pixels. The reconstructed result from this interpolated hologram is shown in Figure 6(b). Comparing Figure 6(b) with (a), it can be seen that the high-frequency information is not restored by bilinear interpolation, so the image suffers from serious aliasing effects. Next, in a similar manner, the original hologram with its resolution of 1024×1024 pixels is bilinearly interpolated to create a new hologram with a resolution of 2048×2048 pixels. Comparing its reconstructed result, shown in Figure 6(c), with the result shown in Figure 6(d), which is from an SR hologram acquired in four-scan mode, it can be seen that the SR result provides better noise reduction and gives a correct reconstruction, whereas in Figure 6(c) an undesired replica of both the twin image and the image of the die can be seen. Therefore we can conclude that interpolation cannot restore the high-frequency information lost or damaged by LR sampling, but SR algorithms can.
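The same effect can be demonstrated on a toy 1-D fringe: once a frequency above the Nyquist limit of the down-sampled grid has been aliased, interpolation cannot bring it back. The signal below is illustrative only, not hologram data:

```python
import numpy as np

n = 1024
# A fringe of 300 cycles across 1024 samples: fine on the original grid,
# but above the Nyquist limit (256 cycles) of a 512-sample grid.
signal = np.cos(2 * np.pi * 300 * np.arange(n) / n)

down = signal[::2]  # keep every other sample: the fringe is now aliased

# Linearly interpolate back up to 1024 samples (1-D analogue of the
# bilinear interpolation used on the down-sampled hologram).
up = np.interp(np.arange(n), np.arange(0, n, 2), down)

# The interpolated signal still differs strongly from the original:
# the lost high-frequency fringe is not recovered.
err = np.max(np.abs(up - signal))
```

The large residual `err` is the 1-D counterpart of the aliasing artifacts in Figure 6(b): interpolation only smooths the aliased samples, it does not restore the original fringe.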
An example of using SR holograms to measure the surface of a cylindrical roller is shown in Figure 7. (To see the images referred to, please visit www.AdvancedImagingPro.com) The phase-difference images were obtained by a two-source contouring method. Two holograms of a cylindrical roller were captured with different illumination angles. After numerical reconstruction, one phase image was subtracted from the other to generate the phase-difference image shown in Figure 7(b). This process was repeated by recording holograms in micro-scanning mode, and another phase-difference image was obtained, as shown in Figure 7(a). The results shown in Figure 7(a), (c), (e) and (g) were calculated from SR holograms with a resolution of 2048×2048 pixels, captured in four-scan mode. Figure 7(b), (d), (f) and (h) were calculated from LR holograms with a resolution of 1024×1024 pixels, captured in normal mode. It is very clear that a larger surface area of the roller is measured by the SR holograms, and the 3D surface in Figure 7(g) exhibits less noise than that shown in Figure 7(h).
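The phase-subtraction step of two-source contouring can be sketched as follows, assuming the two numerically reconstructed fields are available as complex arrays (the function name is our own; producing a continuous 3D surface map from the wrapped result would additionally require phase unwrapping):

```python
import numpy as np

def phase_difference(field_a, field_b):
    """Wrapped phase difference between two reconstructed complex fields,
    as used in two-source contouring.

    Subtracting the two phase images directly can produce values outside
    (-pi, pi], so the difference is re-wrapped via a complex exponential.
    """
    dphi = np.angle(field_a) - np.angle(field_b)
    return np.angle(np.exp(1j * dphi))  # wrapped into (-pi, pi]
```

Applied pixel-wise to the two reconstructions with different illumination angles, this yields phase-difference maps like those in Figure 7(a) and 7(b).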
Yan Li received her BSc degree in Electronics Engineering from Southeast University, China, in 1999, and her MSc degree in Optical Engineering from Zhejiang University in 2003. She joined the General Engineering Research Institute at Liverpool John Moores University (Liverpool, U.K.) in 2004 to study for a PhD in digital holography. Her research interests also include 3D surface measurement, digital image processing and fiber transmission. Prof. Michael Lalor is co-director of the General Engineering Research Institute (GERI), Liverpool John Moores University. Prof. David Burton is Director of GERI. The authors wish to thank Dr. Francis Lilley for his assistance.
Gabor, D. (1948) A New Microscopic Principle. Nature, 161, 777-778.
Kang, M.G. & Chaudhuri, S. (2003) Super-resolution Image Reconstruction. IEEE Signal Processing Magazine, 20, 19-20.
Park, S.C., Park, M.K. & Kang, M.G. (2003) Super-resolution Image Reconstruction: A Technical Overview. IEEE Signal Processing Magazine, 20, 21-36.