Whether users are designing a new car, pushing the boundaries of science, or ensuring the safety of our cities, managing the massive data sets involved in leading-edge research and design continues to hamper productivity and delay breakthrough insights. Not only must users contend with an overwhelming volume of data, they are also expected to complete their work in less time. As the amount of data that can be collected grows, fully understanding that data has become the bottleneck in all manner of problems and workflows.
This extreme growth in data, to terabyte levels and beyond, is no surprise to industry experts, who have long seen that the quest for greater accuracy and detail would drive data sets to grow even faster than Moore's Law. This drive comes in three forms:
Greater precision often causes a twofold to tenfold increase in data sizes. In almost every field, data is being produced with more detail, rather like watching TV in high definition instead of standard definition. For example, MRI scanners now provide detail at the one-millimeter rather than the 10-millimeter level. X-rays are set to go from 2-D to 3-D. Car crash studies model every surface down to the millimeter rather than the large, simple blocks used 15 years ago.
Adding more richness to the information, which can cause twofold to twentyfold data increases, is like the difference between watching TV in color rather than black and white. Industry examples include adding multiple geophysical attributes for seismic interpretation, or looking at more complex physical properties in a car crash rather than just surface stress.
Modeling more completely can cause tenfold to one hundredfold increases in complexity. It is like thinking not just about how fast a car goes, but also how quiet it is and how it handles; or not just where the oil is, but modeling the way the oil flows to assess the optimal way to squeeze it out of the ground.
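Because these three drivers act at once, their factors multiply rather than add. A minimal sketch, using the article's rough factor ranges and a hypothetical 10-gigabyte baseline data set (the baseline figure is an assumption for illustration, not from the article):

```python
# Illustrative only: the three growth drivers compound multiplicatively.
# Factor ranges are the article's rough estimates; the 10 GB baseline
# is a hypothetical starting point.

baseline_gb = 10          # hypothetical baseline data set size, in GB

precision = (2, 10)       # finer detail: 2x to 10x
richness = (2, 20)        # richer attributes: 2x to 20x
completeness = (10, 100)  # more complete modeling: 10x to 100x

low = baseline_gb * precision[0] * richness[0] * completeness[0]
high = baseline_gb * precision[1] * richness[1] * completeness[1]

print(f"Combined growth: {low} GB to {high / 1000:.0f} TB")
# → Combined growth: 400 GB to 200 TB
```

Even at the low end of every range, a modest 10 GB data set grows 40-fold; at the high end it reaches hundreds of terabytes, which is why terabyte-scale data is now routine in these fields.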