Smart Sensor Fusion

Digital smart multi-sensor fusion is a technique for intelligently integrating coherent spatial and temporal information, from sensors operating in different wavelength spectra, into a compact form. Fusion can increase the informational content presented to the viewer when images from sensors working in different wavelength bands are combined seamlessly.

The fusion process seamlessly integrates complementary information from different sensors and generates an image that is rich in content.

Fusion finds application in increasing situational awareness and target cueing, and is widely used in military systems. Fusion systems have been developed that combine low light level imaging with thermal imaging; in such systems, low light level imaging can be utilized for situational awareness and thermal imaging for target cueing.

Low light sensors are typically low lux CMOS/CCD sensors, Electron Multiplying CCDs (EMCCDs), or Image Intensifiers. These generally depict the scene much as humans perceive it. Thermal imagers, cooled or uncooled, show targets as black or white depending on object temperature. Infrared thermal imaging is less attenuated by smoke and dust, but its drawback is that thermal imagers often lack the resolution and sensitivity to provide acceptable imagery of the overall scene.

A digitally fused system uses the low light sensor's intensity information for situational awareness, while the thermal sensor is apt for threat detection. Fusing both provides the best possible picture to the user for visualization, or to the computer for ATR (Automatic Target Recognition) applications. For applications such as ATR, fusing the multispectral information into a single image stream provides enhanced capability, better accuracy, and possibly reduced processing requirements.

It is important to fuse the images using sophisticated algorithms that can pick the best features from the different sensors and merge them seamlessly and intelligently. A naive averaging algorithm can degrade system performance, lowering the contrast and quality of the images compared to using the individual sensors on their own.

Pixel level fusion picks the best features from both worlds and presents the best possible image to the user or to the computer. Most COTS algorithms use simple averaging, blending, or overlaying of the different sensors, which results in poor performance in critical situations.
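To make the contrast concrete, here is a minimal pixel level fusion sketch in Python/NumPy. The Laplacian-based "activity" measure and the function names are illustrative assumptions, not any specific product's algorithm; real systems use far more sophisticated feature selection.

```python
import numpy as np

def local_contrast(img):
    # 4-neighbour Laplacian magnitude as a cheap "activity" measure
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.abs(lap)

def fuse_pixel_level(visible, thermal):
    # At each pixel, keep the sample from whichever co-registered
    # sensor shows more local detail, rather than diluting both.
    mask = local_contrast(thermal) > local_contrast(visible)
    return np.where(mask, thermal, visible)

def fuse_average(visible, thermal):
    # Naive COTS-style blend: halves the contrast of any feature
    # that only one sensor sees.
    return 0.5 * (visible + thermal)
```

A hot spot visible only in the thermal channel survives at full contrast under the selection rule, but arrives at half contrast under plain averaging.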

Fusion systems are now a common part of soldier equipment (weapon sights, hand-held sights, helmet-mounted night vision), driver vision systems for vehicles, long range sea/border surveillance, fire control, missile guidance systems, unmanned aerial vehicles, etc.

Soldier Equipment – Traditionally a soldier was equipped with low light imaging sensors or thermal imagers in the form of weapon sights, hand-held sights, or helmet-mounted devices for enhanced vision. Image Intensifiers and other low light sensors produce images that look natural to humans but still have limitations: they are not the best sensors for threat detection, since a camouflaged target, for example, can sneak through undetected. Thermal imagers are the best sensors in such situations, but the images they generate are generally not very informative, and the soldier has to do a lot of mental correlation to make sense of them.

Fusion solves the problem by bringing both sensors together, producing an image that adds contextual information to the thermal image. Such a fused image is a much more perceivable form of imagery and makes it easier to distinguish friend from foe.

Driver Night Vision Systems — The most basic requirement of a driver-assistance system is that the images it produces should be very easy to comprehend, as the driver has only a very short time to react. A thermal imager can produce crisp images of the road at night, but that alone is not sufficient, because it is difficult for the driver to correlate them with the scene he sees himself. A low light sensor can produce images that are easy on the eyes, but its performance varies with the amount of available light, though most low light sensors are good enough to work under starlit conditions. A fusion system can achieve both and adjust automatically to the illumination conditions, making it the best technology for such applications.
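The automatic adjustment described above can be sketched as an illumination-weighted blend. This is a toy model under simple assumptions (co-registered images normalized to [0, 1], mean intensity as the illumination estimate), not any vendor's algorithm:

```python
import numpy as np

def adaptive_fuse(lowlight, thermal):
    # Crude illumination estimate: mean low light intensity in [0, 1]
    w = float(np.clip(lowlight.mean(), 0.0, 1.0))
    # Bright scene -> trust the natural-looking low light image;
    # dark scene -> lean on the thermal channel instead
    return w * lowlight + (1.0 - w) * thermal
```

In a pitch-dark scene the weight collapses to zero and the output is pure thermal; in a well-lit scene the driver sees mostly the natural low light image.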

Surface Land-mine Detection – Thermal imagers have the ability to pick up the difference in temperature profile caused by the presence of a buried mine. Similarly, a multitude of sensors such as GPR (Ground Penetrating Radar) and even visible cameras are used to pick up changes in the surface's physical or thermal properties. Mine detection algorithms tend to use information from the different sensors to reduce the false alarm rate and maximize the detection rate. Fusion is the most critical component: it first fuses information from these sensors, making it easier for the algorithms to automatically detect the presence of foreign objects. It has been observed that the probability of detection increases by almost 96% if the algorithms operate on fused imagery instead of using decision-level fusion.
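The distinction between the two pipelines can be sketched as follows. The thresholding "detector", the sensor names, and the use of averaging as the image combiner are all simplifying assumptions for illustration only; real mine-detection algorithms are far more elaborate:

```python
import numpy as np

def detect(img, thresh=0.5):
    # Toy detector: flag pixels above a threshold (a stand-in for a
    # real mine-detection algorithm)
    return img > thresh

def decision_level(thermal, gpr):
    # Decision-level fusion: detect per sensor, then OR the binary
    # decisions -- a noise spike in either sensor becomes an alarm
    return detect(thermal) | detect(gpr)

def pixel_level(thermal, gpr):
    # Image-level fusion: combine the raw imagery first (averaging
    # here, purely as a placeholder combiner), then detect once
    return detect(0.5 * (thermal + gpr))
```

A spike present in only one sensor trips the decision-level OR but is diluted away in the fused image, while a target seen by both sensors survives either way, illustrating why fusing before detection can cut the false alarm rate.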

Long Range Surveillance — Day-and-night border surveillance and sea and airport surveillance require the system to work around the clock, even under poor weather and visibility conditions. Traditionally, CCDs and cooled thermal imagers have been used in conjunction, and it is left to the user to detect threats. In surveillance, the end user is generally already working at maximum sensory input capacity, and correlating the data from the various sensors makes his job even more difficult. Fusion methods try to provide him the best possible view integrated from the different sensors, helping him comprehend the scene much more effectively and successfully detect threats. It increases his situational awareness manyfold by providing a single image instead of requiring him to manually correlate a montage of views.

Missile Guidance Systems – Modern seekers use thermal imagers because of their better threat detection ability, and the sophisticated ATR algorithms running on the onboard computer inside the missile use these images to detect target features, keeping the missile locked on and increasing the hit ratio. The ATR algorithms, though, work best when good features on the target are visible. A thermal imager can pick up the presence of the target but does not pick up features as well as a camera operating in the visible spectrum (CCD/CMOS). Fusion again plays a major role by combining information from these two sensors in the best possible form. It not only allows the ATR algorithms to pick up the target but also reduces false detections by using the features present in the fused image.

Unmanned Aerial Vehicles – UAVs have started to find applications in both the civil and military space. In military UAVs, for example, fusion increases the threat detection ability by bringing out camouflaged objects on the ground.