"CCD always has been a custom fabrication process," Janesick says. "CMOS, for a good five or six years, has been advertised as a standard commercial process. It's cheaper. There's nothing custom. It didn't go very far in terms of raw performance. CCD is textbook quality, a perfect device. That's why you can see the edge of the universe with it. That's the standard.
"The CMOS people said, 'silicon is silicon. It can do the same things.' That's far from the truth. CMOS is lacking in raw performance—the work we're doing is to get it up to CCD standards. There are fundamental differences in architecture and operations."
The choice really is application-dependent, ranging from cell phones to the Hubble Space Telescope, with hundreds of applications in between.
"You look at the application, the economics and the performance," Janesick says. "Currently you would use CMOS for the camera phone because the detector has to be integrated, you need low power, and the quality doesn't have to be that good. It's fundamentally difficult to take good pictures; the lenses can't do it. You need a very high-quality lens to work with 1.5-micron-pixel cameras.
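Janesick's point about lenses can be made concrete with a back-of-the-envelope diffraction calculation. The wavelength and f-number below are illustrative assumptions, not figures from the interview; the sketch just compares a diffraction-limited blur spot against a 1.5-micron pixel:

```python
# Diffraction-limited Airy-disk diameter: d = 2.44 * wavelength * f-number.
wavelength_um = 0.55   # green light, ~550 nm (assumed)
f_number = 2.8         # plausible phone-camera lens aperture (assumed)

airy_diameter_um = 2.44 * wavelength_um * f_number
pixel_pitch_um = 1.5   # the 1.5-micron pixels mentioned in the article

print(f"Airy disk: {airy_diameter_um:.2f} um vs pixel pitch: {pixel_pitch_um} um")
# The blur spot spans several 1.5-um pixels, so even a good lens, not the
# sensor, becomes the limiting factor at these pixel sizes.
```

Under these assumed numbers the optical blur is more than twice the pixel pitch, which is the sense in which "the lenses can't do it."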
"The cell phone camera is based on economics; the Hubble costs millions of dollars to produce a camera system. You go with CCD because it's a textbook sensor. CMOS can perform the same way, but it takes a lot of custom work. We start with a large pixel of 8 microns or larger. Then you have to 'stitch' the device: when you fabricate the sensor you have a limited field of view in the lithography, so you stitch several fields together to form one device. You also need high-end, thin silicon. It's blind to IR, because you only care about visible. In science, we go to very thick, epitaxial silicon. It's expensive. And you want the device to be thin. That increases the cost. You may wind up quadrupling the cost."
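The stitching Janesick describes follows from simple geometry: a lithography stepper exposes one reticle field at a time, so a sensor larger than that field must be assembled from a grid of abutted exposures. The field and sensor dimensions below are hypothetical round numbers chosen only to show the arithmetic:

```python
import math

# Assumed numbers for illustration: a reticle exposure field of 26 x 33 mm
# and a large scientific sensor die of 60 x 60 mm.
reticle_field_mm = (26.0, 33.0)
sensor_mm = (60.0, 60.0)

# Each axis needs enough abutted fields to cover the sensor dimension.
fields_x = math.ceil(sensor_mm[0] / reticle_field_mm[0])
fields_y = math.ceil(sensor_mm[1] / reticle_field_mm[1])

print(f"Stitch grid: {fields_x} x {fields_y} = {fields_x * fields_y} exposures")
```

Every extra exposure in the grid adds alignment and yield risk, which is one reason the custom work multiplies the cost of a large-format device.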