For decades, machine vision and automation have combined to yield faster, more efficient operation in industrial applications. Automation aids vision, as when robot arms present products to vision systems for inspection. Machine vision assists motion control, for example, when electronic imagers search for fiducials in order to stop conveyors at the proper moment. The two can be tightly coupled, as for microelectronics assembly. But it’s the next level of synchronization—visual servoing—that’s making headlines today.
Executed primarily with the aid of sophisticated algorithms, visual servoing allows a robotic arm to tightly synchronize with even a moving target, based on real-time feedback from the vision system. The approach would enable robot arms to install tires on the moving chassis of an assembly line, for example, or to grab swaying carcasses in a slaughterhouse. The concept is simple enough, but what is easy and intuitive for biological systems has always been more complex for electromechanical systems. Now, researchers and engineers are making significant inroads toward solving the problem.
WHEN WORLDS COLLIDE
The goal of visual servoing is bringing two elements together, such as a robotic end effector and a chip, or a wheel gripped by a robotic arm and the axle studs of an automobile chassis. The system must recognize the part, establish a coordinate system for the part, establish the location of the robot end effector in that coordinate space, and bring it to the part. The problem is simple enough in concept, but it wasn’t until the industry began putting the emphasis on processing and intelligence that visual servoing began to come to fruition.
As its name suggests, the technique uses machine vision as the feedback source. Configurations vary. Multiple cameras can be used: one to “see” the object and another to “see” the target. If a single camera is used, as in microelectronics assembly, the machine vision system has to be able to see both end effector and target in the same field of view. The next step is to determine the offset between the end effector and the target in three translational dimensions and three rotational dimensions, then calculate the movement required to bring that offset to zero. It is a continuous process of feedback and response, the same way a human eye tracks a moving ball.
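The core of the loop described above can be sketched in a few lines of code. The following is a minimal illustration, not any vendor's implementation: it assumes the vision system reports both poses as 6-vectors (three translational and three rotational components) in a shared coordinate frame, and uses a simple proportional controller to drive the offset toward zero. The function name, gain value, and pose format are all assumptions made for the sketch.

```python
import numpy as np

def servo_step(target_pose, effector_pose, gain=0.5):
    """One iteration of a proportional visual-servo loop.

    Poses are assumed 6-vectors [x, y, z, rx, ry, rz]: translation
    plus rotation, both measured by the vision system in the same
    camera-derived coordinate frame.
    """
    offset = target_pose - effector_pose   # 6-DOF error, vision-measured
    command = gain * offset                # proportional move toward zero offset
    return command, np.linalg.norm(offset)

# Simulated loop: drive the end effector toward the target until the
# residual offset is negligible, mimicking continuous feedback/response.
target = np.array([100.0, 50.0, 20.0, 0.0, 0.1, 0.0])
effector = np.zeros(6)
for _ in range(50):
    command, error = servo_step(target, effector)
    effector = effector + command          # apply the commanded move
    if error < 1e-3:
        break
```

In a real system the "apply the move" step would command the robot and the next offset would come from a fresh image, so lens distortion, thermal drift, and encoder error are all absorbed by the closed loop rather than modeled explicitly.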
Visual servoing is based on where the system components actually are versus where the hardware thinks they are, which means that even low-end hardware can achieve extremely precise alignment. A system can reach 20 or even 10 µm accuracy without the need for multiple cameras, linear motors, and high-end absolute encoders (see figure 1). “As long as we can see the target and end effector in our field of view, we can drive the two together,” says Brian Powell, vice president of sales and operations at Precise Automation (San Jose, Calif.). “It doesn’t matter if it has heated, cooled, or isn’t perfectly aligned, because we’re driving our motion to the actual alignment itself; we’re not depending on some external motion-sensing device that is a proxy for the actual alignment.”