Advanced Imaging Magazine

Updated: January 12th, 2011 09:49 AM CDT

Architecture for Airborne Applications

Processors for Airborne Intelligence, Reconnaissance, and Surveillance
Images courtesy SRC Computers
Figure 1: In the Implicit+Explicit Architecture, Dense Logic Devices (DLDs) encompass a family of components that includes microprocessors, digital signal processors and some ASICs. These processing elements are all implicitly controlled and typically are made up of fixed logic that is not altered by the user.
Figure 2: Systems can be built with a single MAP processor and microprocessor combination, or when more flexibility is desired, Multi-Ported Common Memory accommodating up to three MAP processors and Hi-Bar switches accommodating thousands of MAP processors can be employed.
Figure 3: SRC servers that use the Hi-Bar crossbar switch interconnect can incorporate common memory nodes in addition to microprocessor and MAP nodes. Each of these common memory nodes contains an intelligent DMA controller and up to 16 GB of DDR-2 SDRAM.
Figure 4: The MAP processor used in this system was the most powerful SRC-6 MAP processor ever produced. It was coupled to an Intel Pentium microprocessor and used a Fedora Linux operating system.
Figure 5: The second airborne system in production is a 10-module system designed for payload bay 3 of the General Atomics Sky Warrior, but is also usable in other larger manned and unmanned platforms. It contains a dual Xeon motherboard, a Hi-Bar switch, 750 Gbytes of removable encrypted storage, 28 VDC power system, thermal solution and a mixture of up to 10 MAP processors or common memory modules.
Figure 6: This system is being designed to withstand an operating temperature range from –50°C to +50°C and altitudes in excess of 25,000 feet, and will meet shock and vibration requirements for single-engine aircraft weighing less than 12,500 pounds.
Figure 7: A grayscale (GS) pixel’s intensity is simply the pixel’s eight-bit numeric value, but the intensity information is distributed among the individual RGB values for a color pixel. To obtain the intensity value for an RGB pixel, each 24-bit RGB value is transformed from the RGB color space to the Hue-Saturation-Intensity (HSI) color space. The intensity values for all pixels in both frames are then histogrammed. From these two intensity histograms, a statistical Cumulative Distribution Function (CDF) is created and then normalized for each frame. A mapping function is created from these two normalized CDF arrays to map the original color pixel intensity values to new intensity values such that the new intensity value distribution matches the GS pixel intensity value distribution. The original intensity values are re-mapped and the new HSI image is transformed back into the RGB color space.
Figure 8: The MAP processor’s GCM Bank 0 acts as a frame buffer for the RGB image and GCM Bank 1 acts as a frame buffer for the GS image. In stage 0, two RGB and six GS pixel intensities are histogrammed in parallel every clock cycle. The integer RGB intensity calculation is part of the RGB histogramming pipeline. After all pixel intensities for both frames are histogrammed, stage 1 calculates the CDF arrays for both histograms for all histogram bins in parallel. Stage 2 normalizes both CDF arrays in parallel, a single-precision floating point (SPFP) calculation. Stage 3 uses both normalized CDF arrays to generate the histogram-matching map array. Finally, stage 4 re-reads the RGB image data from GCM Bank 0 at two RGB pixels per clock cycle and calculates the HSI pixel values. The two integer intensity values select two new intensity values from the map array (generated in stage 3). The two new intensity values are cast to SPFP and, together with the two SPFP pixel hue and saturation values, are converted back to the 24 bpp RGB color space and stored in GCM Bank 1.
Figure 9: The CPU normalized cross-correlation application is a single threaded, serial implementation of the algorithm shown in Figure 8.

By David B. Pointer
SRC Computers

Code intended to be compiled for the MAP processor is written as separate subprograms and collected into a single logic block by the Carte Programming Environment through its code in-lining process. The subprograms are called from routines executing on the microprocessor as well as from other MAP routines. This technique permits hierarchical code development and code reuse. It also permits expressing greater parallelism within code blocks and enables optimizations across subprogram boundaries. The Carte Programming Environment then integrates the microprocessor routines, MAP routines and libraries into a single Unified Executable that contains the microprocessor instructions, the MAP logic and all library routines required to control program execution.

Debugging Tools

The Carte Programming Environment fully supports co-development of software and hardware functional units through the use of debugger and hardware simulation tools. With the Carte Programming Environment managing both the microprocessor code and the simulation code, applications execute seamlessly in hardware or simulation environments. The Carte Programming Environment creates program executables that support source-level debugging through GNU’s gdb debugger, the Intel debugger, or any microprocessor-oriented code development tools. In debug mode, a program executes entirely on the microprocessor: routines coded for MAP processors are retargeted to the microprocessor with no source-line changes. This mode of development permits highly productive algorithm development and debugging. A MAP routine developed in debug mode will execute correctly when retargeted to the MAP processor, and debug mode can be used on any standard PC containing the Carte Environment.


SAR Backprojection

The Spotlight Synthetic Aperture Radar (SAR) Backprojection algorithm is considered the “gold standard” of SAR imaging techniques. This section describes a study that compared the performance of a pure MATLAB implementation of a 2D SAR Backprojection application to a mixed MATLAB/MAP routine implementation and to an all-C implementation. The compute-intensive MATLAB routines were converted into C and compiled using the Carte Programming Environment, which targets the SRC IMPLICIT+EXPLICIT Architecture with its MAP reconfigurable hardware.

A spotlight SAR image is a two- (or three-) dimensional mapping of received radar energy. A SAR sensor illuminates a target area with a series of linear frequency modulation pulses. The location of an individual scatterer is determined by measuring its range and Doppler (range rate) and comparing these to a central reference point, called the motion compensation point. As more pulses are used, the azimuth, or cross-range, resolution increases.
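One standard way to quantify this relationship (an addition here, not stated in the article) is the cross-range resolution rule of thumb

\[
\delta_{cr} \approx \frac{\lambda}{2\,\Delta\theta}
\]

where \(\lambda\) is the radar wavelength and \(\Delta\theta\) is the angular aperture swept out by the sensor over the pulse collection: collecting more pulses widens \(\Delta\theta\) and so sharpens \(\delta_{cr}\).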

Several algorithms have been developed to form spotlight SAR images, and choosing among them involves a tradeoff between computational efficiency and imaging accuracy. For instance, the simplest approach is to order the pulses into a rectangular array and perform a two-dimensional Fourier transform. However, the resulting image will not be very accurate, as this approach does not compensate for scatterer motion through the synthetic aperture. The most accurate image formation algorithm is tomographic backprojection, which calculates an exact solution for every pixel in the image but has very high computational cost. Numerous algorithms have been developed that offer acceptable accuracy at much lower computational cost than backprojection; the most popular of these is the polar format algorithm.
