Q & A with Len Yencharis
by Len Yencharis
Len Yencharis: Can you explain the benefits of using FPGAs for embedded military vision applications? Processing image data at high frame rates, converting and mapping the data, and performing image segmentation were previously handled by dedicated, proprietary processors. What has changed the military's thinking in terms of ROI (Return On Investment), COTS (Commercial Off-The-Shelf) and reliability?
David L. McCubbrey, Chief Technical Officer, Pixel Velocity, Inc.: FPGAs are absolutely ideal for embedded military vision, because these applications are very demanding but face severe space, weight and power constraints. What's changed is that the processing capacity of FPGAs is now on a par with ASICs. That's a tremendous opportunity, because FPGAs eliminate the cost and risk of ASIC development, and in many cases they can be treated as COTS items by buying them on a card from a variety of vendors. The only things really holding back vision applications have been the difficulty of translating algorithms into hardware and the difficulty of updating those algorithms once they are in hardware.
LY: I see tremendous potential for FPGAs beyond military applications. Can you expand on applications such as medical imaging, automotive collision avoidance and commercial video, and tell us when we might see some real products being developed?
DM: The medical area is a great fit for this technology, because many medical imaging techniques have extremely high processing requirements. FPGAs will allow smaller, faster and less expensive versions of existing devices to be developed. In addition, a lot of new applications will become possible for the first time, because FPGAs can deliver speed-ups of two to three orders of magnitude over PCs at a reasonable price. For instance, we have a grant from the NIH to develop an FPGA-based real-time ultrasound computational architecture for strain-rate imaging. That work will allow some great academic research done at the University of Michigan to take a critical step toward practicality.
A lot of work has been going on in automotive applications for some time. Automotive vision applications that enhance driver situational awareness will probably be the first to arrive, because they require the least processing. Later on (perhaps six years out), applications like lane-change assistance, backup obstacle warning and forward collision warning will come.