With advances in tracking technologies, new and challenging navigational applications have emerged. The availability of mobile devices with global positioning system receivers has stimulated the growth of location-based services, or LBS, such as location-sensitive directories, m-commerce and mapping applications.
Today, standard GPS devices are becoming increasingly affordable, and GPS promises to be the leading technology for LBS. Commercial applications such as in-car navigation systems are already established. For other potential applications such as pedestrian urban navigation, however, standard GPS devices remain deficient in accuracy (errors range from 10 to 50 meters), in coverage in urban areas (between high buildings or inside tunnels), and in coverage in indoor environments.
An assisted GPS application has been proposed that features orientation assistance provided by computer-vision techniques, which detect features along the navigation route. These could be either user-predefined fiducials or a careful selection of real-world features (e.g., parts of buildings or whole buildings).
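The idea behind such orientation assistance can be sketched as follows: once a landmark with known coordinates is recognized in the camera view, the user's heading is the landmark's compass bearing (computed from the GPS fix) minus the angle at which the landmark appears relative to the image center. The function names and the flat east/north grid below are illustrative assumptions, not part of the proposed system.

```python
import math

def bearing_deg(from_en, to_en):
    """Compass bearing (degrees clockwise from north) from one
    (east, north) position to another on a local planar grid."""
    d_east = to_en[0] - from_en[0]
    d_north = to_en[1] - from_en[1]
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

def heading_from_landmark(user_en, landmark_en, landmark_cam_angle_deg):
    """Estimate the user's heading from a recognized landmark.

    landmark_cam_angle_deg is the angle at which the landmark appears
    in the camera view (positive = right of the image center)."""
    return (bearing_deg(user_en, landmark_en) - landmark_cam_angle_deg) % 360.0

# A landmark 100 m east and 100 m north of the user sits at bearing 45
# degrees; if it appears 10 degrees left of the image center, the user
# must be facing 55 degrees.
print(heading_from_landmark((0.0, 0.0), (100.0, 100.0), -10.0))
```

In practice the camera-relative angle would come from the pixel position of the detected fiducial and the camera's field of view, but the geometric correction itself reduces to this subtraction.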
With the combination of position and orientation it is possible to design "augmented reality" interfaces, which offer a richer cognitive experience and deliver orientation information continuously, without the limitations of maps. However, to annotate the environment in an AR setup, position and orientation must be computed continuously in real time. The key problem for vision-based AR is the difficulty of obtaining sufficiently accurate position and orientation information in real time, which is crucial for stable registration between real and virtual objects.
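To make the registration requirement concrete, the sketch below projects a virtual annotation at a known world position into the camera image using a standard pinhole model, given the camera's position and heading. This is a minimal illustration of why pose accuracy matters, not the system's actual rendering pipeline; the coordinate convention (east, up, north), focal length, and principal point are assumed values.

```python
import math

def project_point(world_pt, cam_pos, cam_yaw_deg,
                  fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a world point (east, up, north) into image pixels for a
    pinhole camera at cam_pos, looking along +north when yaw is 0.

    Returns (u, v) pixel coordinates, or None if the point is behind
    the camera."""
    # Translate into camera-centred coordinates.
    dx = world_pt[0] - cam_pos[0]
    dy = world_pt[1] - cam_pos[1]
    dz = world_pt[2] - cam_pos[2]
    # Rotate about the vertical axis so the optical axis is +z.
    yaw = math.radians(cam_yaw_deg)
    x = math.cos(yaw) * dx - math.sin(yaw) * dz
    z = math.sin(yaw) * dx + math.cos(yaw) * dz
    y = dy
    if z <= 0:
        return None  # behind the camera, not visible
    # Perspective division; image y grows downward.
    return (fx * x / z + cx, fy * (-y) / z + cy)

# A point 10 m straight ahead lands on the principal point (image center).
print(project_point((0.0, 0.0, 10.0), (0.0, 0.0, 0.0), 0.0))
```

Note how sensitive the overlay is to orientation error: at 10 m, a one-degree yaw error shifts the projected pixel by roughly `fx * tan(1°)`, about 14 pixels here, which is why continuous, accurate pose estimation is the crux of stable registration.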
The Development of Location Context tools for UMTS Mobile Information Services research project, or LOCUS, aims to significantly enhance the current map-based user-interface paradigm on mobile devices through the use of virtual reality and augmented reality techniques. Based on the principles of both AR and VR, a prototype mixed reality interface has been designed that superimposes location-aware 3D models, 3D sound, images, and textual information in both indoor and outdoor environments. As a case study, the campus of City University (London) has been modeled, and preliminary tests of the system were performed on an outdoor navigation task within the campus.