The National University of Mexico (UNAM) has established the Observatory of Visualization, referred to as IXTLI, an Aztec word meaning face and eye. The innovative IXTLI facility allows professors and researchers to study real or abstract objects, scientific phenomena, theoretical concepts, and complex models in a three-dimensional immersive virtual reality environment. IXTLI is used to conduct scientific research and instruction in multiple disciplines, including archeology, medicine, molecular chemistry, geography, biochemistry, architecture, topology, psychology, and microbiology.
According to Dr. Genevieve Lucet, Director of Computing Facilities for Research at UNAM, “The goal is to maximize the educational experience for students and provide a powerful investigative tool to researchers. The IXTLI’s use of state-of-the-art immersive visualization technology is unique in Latin America. This room’s advanced display technology allows participants to visualize and simulate complex objects and images in 3D with real-time image control and manipulation.”
UNAM contracted Fakespace Systems (Marshalltown, Iowa), a pioneer in the development of immersive visualization and virtual reality, to design and install the display system. Malcolm Green, Senior Account Executive for Fakespace Systems, said, “The centerpiece is a massive 30-foot-long by 8.3-foot-high, 140-degree curved contiguous screen.” The screen serves as a “canvas” on which multiple monoscopic image windows can be presented simultaneously.
SuperView display processors from RGB Spectrum (Alameda, Calif.) were selected to provide high-performance image integration, displaying multiple visuals as they are generated from different sources. The processors allow manipulation of all on-screen windows, creating dynamic user interaction. The display is also designed for immersive virtual reality: the system’s projectors are capable of presenting stereoscopic, computer-generated images with three-dimensional depth perception.
“The visuals are dynamic, allowing instructors and researchers to naturally interact with and manipulate images in real time,” explained Dr. Lucet. “The individual is outfitted with a movement tracking system composed of a wireless glove with finger sensors, head sensors, and a sensor that analyzes motion of a wand or three-dimensional mouse device. As the individual moves his or her body and head, the image generation system regenerates the visuals to match the new position and viewing perspective, as if the individual were moving in the real world. Stereoscopic, three-dimensional depth sensation is generated by projecting distinct images for each eye and alternating these at high speed. The audience views the imagery through electronic glasses that shutter open and closed in sync with the alternating views presented by the projector. The shuttering is imperceptible to the user and creates the two-perspective view necessary for the three-dimensional experience.”
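The rendering loop Dr. Lucet describes can be sketched in a few lines. The following is a minimal illustration, not UNAM’s actual software: each frame, the latest tracked head pose is read, the virtual camera is offset by half the interocular distance for the current eye, and the eyes alternate frame by frame so shutter glasses can sync to the display. The function names, the 65 mm interocular distance, and the simplified pose representation are all illustrative assumptions.

```python
# Illustrative sketch of a head-tracked, active-shutter stereo loop.
# Assumed values: IPD of 0.065 m (a typical adult figure); poses given
# as (head_position, head_right_vector) tuples from a tracker.

IPD = 0.065  # interocular distance in metres (assumed typical value)

def eye_position(head_pos, right_vector, eye):
    """Offset the tracked head position along the head's right vector:
    -IPD/2 for the left eye, +IPD/2 for the right eye."""
    sign = -0.5 if eye == "left" else 0.5
    return tuple(h + sign * IPD * r for h, r in zip(head_pos, right_vector))

def render_sequence(head_poses):
    """Alternate left/right eye viewpoints at high speed, one eye per
    frame, regenerating each viewpoint from the latest tracker sample
    (the shutter glasses would open the matching eye on each frame)."""
    frames = []
    for i, (pos, right) in enumerate(head_poses):
        eye = "left" if i % 2 == 0 else "right"
        frames.append((eye, eye_position(pos, right, eye)))
    return frames

# Two tracker samples: head at rest, then moved 0.1 m along x.
poses = [((0.0, 1.7, 0.0), (1.0, 0.0, 0.0)),
         ((0.1, 1.7, 0.0), (1.0, 0.0, 0.0))]
for eye, cam in render_sequence(poses):
    print(eye, cam)
```

In a real system the two eye images would be drawn to the projector at double the nominal frame rate, with an emitter signaling the glasses which shutter to open; the sketch only shows the geometry of the alternating viewpoints.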