As computer chips continue to advance, the rendering of 3D images is also changing: the latest generation of graphics hardware has enabled new rendering techniques.
Rendering converts a model into an image either by simulating light transport to produce photorealistic images, or by applying some kind of style as in non-photorealistic rendering (NPR), an area of computer graphics that focuses on enabling a wide variety of expressive styles for digital art. In contrast to traditional computer graphics, which has focused on photorealism, NPR is inspired by artistic styles such as painting, drawing, technical illustration and animated cartoons. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). Rendering is usually performed using 3D computer graphics software or a 3D graphics application programming interface (API). Preparing the scene for rendering also involves 3D projection, which allows a three-dimensional scene to be viewed in two dimensions.
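The 3D projection step mentioned above can be sketched in a few lines. The snippet below is a minimal illustration, not any particular API's implementation: it assumes a pinhole camera at the origin looking down the +z axis and uses similar triangles to map a 3D point onto a 2D image plane.

```python
def project_point(point, focal_length=1.0):
    """Perspective-project a 3D point onto the z = focal_length image plane.

    Assumes a pinhole camera at the origin looking down the +z axis;
    points at or behind the camera (z <= 0) are rejected.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    # Similar triangles: screen coordinate = focal_length * (coordinate / depth)
    return (focal_length * x / z, focal_length * y / z)

# A point twice as far from the camera projects to half the screen offset.
print(project_point((1.0, 2.0, 4.0)))  # (0.25, 0.5)
```

Real graphics APIs express the same mapping as a 4x4 projection matrix applied to homogeneous coordinates, which also folds in field of view, aspect ratio and depth clipping.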
The gaming industry drove the development of GPUs; now the oil and gas industry gets the equivalent performance at the desktop, making the most up-to-date rendering accessible to almost all users. The latest generation of GPU chips has more parallel computing units, which provides an automatic increase in rendering performance. Programmability makes higher-quality rendering possible, and combining computing and rendering on the GPU opens new opportunities to do both better and more efficiently.
Middleware libraries, such as Mercury's VolumeViz, implement many of these techniques and provide a framework for applications to implement their own. Some relatively new techniques include bump mapping (a technique in which, at each pixel, a perturbation to the surface of the object being rendered is looked up in a heightmap and applied before the illumination calculation, yielding a richer, more detailed surface that more closely resembles the detail inherent in the natural world), dynamic lighting, arbitrarily shaped probes (mapping seismic data onto arbitrary geometry) and co-blending of multiple data sets. Combining computing and rendering on the GPU also enables techniques such as volume clipping, volume masking and volume warping.
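The bump-mapping idea described above can be illustrated with a short CPU-side sketch (in practice this runs per pixel in a GPU shader). This is an assumption-laden toy, not VolumeViz code: the flat surface normal (0, 0, 1) is perturbed per pixel using the heightmap gradient before a Lambertian illumination calculation.

```python
import numpy as np

def bump_mapped_shading(heightmap, light_dir, strength=1.0):
    """Diffuse shading of a flat surface with a bump map applied.

    The flat normal (0, 0, 1) is perturbed per pixel by the heightmap
    gradient before the lighting calculation -- the essence of bump
    mapping: surface detail without extra geometry.
    """
    # Approximate the heightmap gradient with finite differences.
    dhdy, dhdx = np.gradient(heightmap.astype(float))
    # Tilt the flat normal against the local slope, then renormalize.
    normals = np.dstack([-strength * dhdx, -strength * dhdy,
                         np.ones_like(heightmap, dtype=float)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    # Lambertian term N . L per pixel, clamped to non-negative.
    return np.clip(normals @ light, 0.0, None)

# A small ramp heightmap lit from directly overhead: sloped pixels
# receive slightly less light than flat ones would.
bumps = np.array([[0.0, 0.1, 0.2],
                  [0.0, 0.1, 0.2],
                  [0.0, 0.1, 0.2]])
shade = bump_mapped_shading(bumps, light_dir=(0.0, 0.0, 1.0))
```

Because only the normal is perturbed, not the geometry itself, the silhouette of the object stays unchanged; that trade-off is what makes bump mapping cheap enough to run per pixel.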
"3D visualization is and will continue to be a critical part of addressing challenges in exploration and production," Mercury's Heck says. "3D visualization must integrate solutions for data management, computing and rendering. Advances in both hardware and software are coming together to enable larger data sets, more automated analysis and more effective presentation of the data on single workstations. Taking advantage of these advances will be challenging for software developers and will require some re-thinking of application architectures and user interfaces."