By Lee J. Nelson
A Multitude of Applications
Recent interest in FACS centers on applications in social, scientific, legal, and entertainment venues. According to Dr. Bartlett, accurate reading of expressions can help assess a driver’s degree of fatigue and distinguish real from faked pain. In the entertainment arena, we seem preoccupied with deception and claims of extra-sensory abilities. Again, the face can offer insight. As reported in the September 2008 edition of Advanced Imaging, we continue to explore alternatives to the enduring polygraph to help parse truth from lie; FACS also supports that objective.
Previous approaches to detecting driver drowsiness made a priori assumptions about relevant behaviors, focusing on blink rate, eye closure, and yawning. Bartlett and her collaborators at the Faculty of Engineering and Natural Sciences, Sabanci University (Orhanli, Istanbul, Turkey), employed machine learning methods to discover facial configurations that might foretell fatigue. CERT, the automated facial expression analysis system based on FACS, was retrained on a larger dataset of spontaneous and posed examples. In addition to the outputs from CERT, the experimenters acquired head motion and steering-wheel position data. The facial action outputs were passed to a classifier that ascertained the onset of fatigue. Moreover, the study revealed new facial behaviors that occur during different stages of drowsiness.
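The pipeline described above can be sketched in a few lines: per-frame facial-action intensities (standing in for CERT-style outputs), head-motion channels, and a steering-wheel signal are concatenated into one feature vector and passed to a learned classifier that flags fatigue onset. This is a minimal illustration with invented, synthetic features, not the researchers’ actual system.

```python
# Hypothetical sketch of the drowsiness pipeline: facial-action
# intensities + head motion + steering data -> binary classifier.
# All feature layouts and data here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_frames(n, drowsy):
    """Synthetic per-frame features: 8 facial-action intensities,
    3 head-motion channels, 1 steering-wheel angle."""
    base = rng.normal(0.0, 1.0, size=(n, 12))
    if drowsy:
        base[:, 0] += 2.0   # e.g. stronger eye-closure action
        base[:, 11] += 1.5  # larger steering corrections
    return base

X = np.vstack([make_frames(200, False), make_frames(200, True)])
y = np.array([0] * 200 + [1] * 200)  # 0 = alert, 1 = drowsy

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

Because the classifier is trained on whatever configurations separate the classes, it can surface discriminative facial behaviors that were not assumed in advance, which is the point of the machine-learning approach here.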
Discriminating authentic from feigned pain (malingering) is vital in clinical medicine. Absent substantial training, doctors score poorly at gauging pain solely from a patient’s appearance. Although investigators at St. Joseph’s Hospital’s Arthritis Institute (London, Ontario, Canada) show that faces convey helpful information, most physicians do not know what data to collect or how to interpret them. Automated measurements prove consistent with evaluations by human experts. At UCSD’s Machine Perception Laboratory, a two-stage classification program separates true from faked pain in a subject-independent manner. The ultimate goal is not to spot malingering per se; rather, it is to validate automated systems’ recognition of facial behavior (which could be missed by an untrained eye) and to advance the field of facial neurology.
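The key phrase above is "subject-independent": the second-stage classifier must generalize to faces it has never seen, not memorize individuals. A hedged sketch of that evaluation protocol follows, with stage-one facial-action detector outputs simulated as feature vectors and every subject, dimension, and effect size invented for illustration.

```python
# Hypothetical subject-independent evaluation for real-vs-faked pain:
# train on all subjects but one, test on the held-out subject, rotate.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)

n_subjects, clips_per_subject = 6, 20
X, y, groups = [], [], []
for s in range(n_subjects):
    offset = rng.normal(0.0, 0.3, size=10)  # per-subject idiosyncrasy
    for _ in range(clips_per_subject):
        fake = int(rng.integers(0, 2))
        feats = offset + rng.normal(0.0, 1.0, size=10)
        feats[0] += 2.0 * fake  # faked pain exaggerates one action
        X.append(feats); y.append(fake); groups.append(s)
X, y, groups = map(np.array, (X, y, groups))

# Leave-one-subject-out: the test subject never appears in training,
# so success implies the classifier learned behavior, not identity.
scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf = SVC(kernel="linear").fit(X[train], y[train])
    scores.append(clf.score(X[test], y[test]))
print(f"mean held-out-subject accuracy: {np.mean(scores):.2f}")
```

Held-out accuracy above chance under this split is what licenses the subject-independent claim; a standard random split would conflate recognizing the behavior with recognizing the person.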
The apparent capability to read minds (viz. CBS television’s The Mentalist, Fox Broadcasting’s Lie to Me, and Medium on NBC), by some accounts, can be explained through exquisite sensitivity to microexpressions, those fleeting and barely perceptible facial manifestations that last fractions of a second. Dr. Mark Frank (Center for Unified Biometrics and Sensors, The State University of New York, Buffalo) has built on the research of his mentor, Paul Ekman. To zero in on lies, Frank examines emotional cues: frowns, furrows, smirks, tics, and displacement actions. And he teaches others—judges, interrogators, law enforcement agents—how to look for and decode microexpressions.
To a certain extent, Frank is codifying human intuition while debunking myths about how to read people. “The literature shows that liars don’t make less eye contact than truth-tellers. But you ask anyone on the planet what liars do, the first thing they agree on is liars don’t look you in the eye,” he affirms. “Even just getting over that mythology is a step in the right direction.”