Parallel computing operates on the principle that large problems can almost always be divided into smaller ones that can be carried out concurrently. The approach has been used for many years, mainly in high-performance computing, but interest has grown in recent years, and it has become the dominant paradigm in computer architecture, mainly in the form of multicore processors. Parallel programs are harder to write than sequential ones because concurrency introduces several new classes of potential software bugs, and communication and synchronization between the subtasks are typically among the greatest barriers to good parallel performance.
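The divide-and-combine idea can be sketched in a few lines. This is a minimal illustration in plain Python (stdlib only, not any vendor's actual code): split a large problem into independent chunks, run the chunks concurrently, then combine the partial results at a synchronization point.

```python
# Sketch: divide a large problem into subtasks, run them concurrently,
# then combine the partial results. The names here are illustrative.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    # Divide: split the input into roughly equal chunks, one per worker.
    chunk = (len(data) + workers - 1) // workers
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Conquer: each chunk is summed as an independent subtask.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, pieces)
    # Combine: this step is the synchronization barrier --
    # it cannot proceed until every subtask has finished.
    return sum(partials)

data = list(range(1_000_000))
assert parallel_sum(data) == sum(data)
```

The combine step is exactly where the coordination cost mentioned above shows up: the fastest subtask gains nothing until the slowest one has finished.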
"This whole parallel thing for a lot of developers is new," Schneider explained. "Before, I would write code and expect everything to write in sequence. Now, people have to think, Core 1 will work here; Core 2 will do different work here at the same time. Then they have to talk to each other. If Core 2 finishes before Core 1, that's bad.
"So, what do I do now that I have four cores? The problem is in the application of the algorithm I have to make it parallel. Forget the hardware. That's a theoretical math design problem. We solved it for GPUs. It's a scarce skill and not that easy to pick up. There's a very narrow bracket of people who understand parallel programming."
It's not all GPU, Schneider says. "It's a heterogeneous system. We use both to get the performance we do. Some in the market say the GPU will take over from the CPU, but we think they're complementary. The CPU runs the operating system and e-mail, and we use the GPU for math and number crunching."
The result is a sort of supercomputer for the common man. Companies no longer have to buy clusters; they get the performance without the expense. These systems use less power, need no dedicated air conditioning, and take up less space.