Sensor Fusion

Real-Time Pixel-Level Sensor Fusion

Description

Digital sensor fusion is a technique for intelligently integrating coherent spatial and temporal information from sensors operating in different wavelength bands into a compact form. Fusion increases the informational content available to the viewer when images from sensors working in different wavelength bands are combined seamlessly.

The fusion process seamlessly integrates complementary information from the different sensors and generates an image that is rich in content.

Fusion finds application in increasing situational awareness (SA) and target cueing, and is widely used in military systems. Fusion systems have been developed that combine low-light imaging with thermal imaging; in such systems, low-light imaging provides SA while thermal imaging provides target cueing.

Low-light sensors are typically low-lux CMOS/CCD sensors, electron-multiplying CCDs (EMCCDs), or image intensifiers. These generally depict the scene much as humans perceive it. Thermal imagers, cooled or uncooled, show targets as black or white depending on object temperature. Infrared thermal imaging is less attenuated by smoke and dust, but its drawback is that it often lacks the resolution and sensitivity to provide acceptable imagery of the full scene.

A digitally fused system uses the intensity channel to convey situational-awareness information, while the thermal sensor is apt for threat detection. Fusing both provides the best possible picture to the user for visualization, or to the computer for Automatic Target Recognition (ATR) applications. For applications such as ATR, fusing the multi-spectral information into a single image stream provides enhanced capability, better accuracy, and potentially reduced processing requirements.

It is important to fuse the images using sophisticated algorithms that pick the best features from the different sensors and merge them seamlessly and intelligently. A simple averaging algorithm can degrade system performance by lowering the contrast and quality of the images compared to using the individual sensors alone.

Pixel-level fusion picks the best features from both worlds and presents the best possible image to the user or to the computer. Most COTS algorithms use simple averaging, blending, or overlaying of the different sensor streams, which results in poor performance in critical situations.
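As a toy illustration of the contrast point above, the sketch below compares naive averaging with a per-pixel selection rule that keeps whichever sensor has the higher local variance. This is a minimal illustration under assumed conventions (images as normalized numpy arrays, a 3×3 variance window), not Tonbo's actual fusion algorithm.

```python
import numpy as np

def average_fusion(a, b):
    # Naive blend: halves the contrast of features present in only one sensor.
    return (a + b) / 2.0

def pixel_level_fusion(a, b, win=3):
    # Select, per pixel, the sensor with the higher local contrast (variance),
    # so strong features from either sensor survive at full strength.
    def local_var(img):
        pad = win // 2
        p = np.pad(img, pad, mode="edge")
        out = np.empty(img.shape, dtype=float)
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = p[i:i + win, j:j + win].var()
        return out
    mask = local_var(a) >= local_var(b)
    return np.where(mask, a, b)
```

With a bright edge present only in sensor `a`, averaging halves its intensity, while the selection rule preserves it at full strength.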

Fusion systems are now becoming an important part of soldier equipment (weapon sights, hand-held sights, helmet-mounted night vision), driver vision systems for vehicles, long-range sea and border surveillance, fire control, missile guidance systems, unmanned aerial vehicles, and more.

 

Digital Fusion for Dismounted Soldier

Traditionally, a soldier was equipped with either low-light imaging sensors or thermal imagers, in the form of weapon sights, hand-held sights, or helmet-mounted devices, for enhanced vision. Image intensifiers and other low-light sensors produce images that look natural to humans but still have limitations: they are not the best sensors for threat detection, and a camouflaged target can sneak through undetected. Thermal imagers are the best sensors in such situations, but the images they generate are generally not very informative, and the soldier has to do a lot of mental correlation to make sense of them.

Fusion solves the problem by bringing the two sensors together, producing an image that adds contextual information to the thermal image. Such a fused image is a much more perceivable form of imagery and makes it easy to distinguish friend from foe.

Digital smart fusion is a leap in technology that is not yet available on any hand-held or portable electro-optical equipment. Most companies around the world use optical fusion to increase situational awareness. Optical fusion degrades the performance of the thermal imager by reducing its contrast, and its output cannot be exported or transmitted. Many companies have been striving to upgrade from optical to digital fusion, but it has not been possible within the constraints of size, weight, and power: smart fusion algorithms are computationally expensive and require large, power-hungry electronics boards to perform real-time fusion. Because this equipment is meant to be mounted on close-combat weapons (such as the AK-47 or MP5), or to be helmet-mounted or hand-held, fusion has not been feasible within the form factor.

Tonbo Imaging, with its vast experience in vision and image processing, has managed to build low-power, low-footprint video electronics with real-time smart sensor fusion. Tonbo builds one-of-a-kind digitally fused low-lux and thermal imaging helmet-mounted night-vision goggles (DUVI) and weapon sights (COBRA), especially designed for close-quarter combat. These units are not only small and portable but also frugal with power, providing battery life similar to single-sensor systems.

DUVI and COBRA are the next generation of soldier equipment, enhancing vision not only by day and night but also in poor weather and illumination conditions. Their design fuses imagery from the two sensors intelligently, without the user needing to switch between day and night modes, and presents the best possible image with the best features from both sensors. Both support SD-card video and snapshot recording, and through external video ports the digitally fused video can be transmitted wirelessly to command and control.

A comparison of various fusion technologies is listed in the next section, along with attached product brochures for some of the existing products in the market.

 

Driver Night Vision System 

The most basic requirement of a driver-assistance system is that the images produced should be very easy to comprehend, as the driver has only a short time to react. A thermal imager can produce crisp images of the road at night, but that alone is not sufficient, because it is difficult for the driver to correlate them with the scene he can see himself. A low-light sensor produces images that are easy on the eyes, but its performance varies with the amount of available light, though most low-light sensors work well under starlit conditions. A fusion system achieves both and adjusts automatically to the illumination conditions, making it the best technology for such applications.
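The automatic adjustment described above can be sketched as an illumination-adaptive blend: the low-light channel's weight tracks scene brightness, so the thermal channel takes over as light fails. This is a minimal sketch assuming images normalized to [0, 1]; the weighting rule and the `floor` parameter are illustrative assumptions.

```python
import numpy as np

def adaptive_fusion(low_light, thermal, floor=0.1):
    # Weight the low-light channel by mean scene brightness: in good
    # illumination it dominates; in darkness the thermal channel takes over.
    # `floor` keeps both channels minimally present at all times.
    w = max(floor, float(low_light.mean()))
    w = min(w, 1.0 - floor)
    return w * low_light + (1.0 - w) * thermal
```

For a bright scene the fused output is dominated by the low-light image; for a dark scene it is dominated by the thermal image, with no mode switch by the driver.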

 

Surface Land-Mine Detection

Thermal imagers can pick up the difference in surface temperature profile caused by a buried mine. Similarly, a multitude of sensors, such as Ground Penetrating Radar (GPR) and even visible cameras, are used to pick up physical or thermal changes in the surface. Mine-detection algorithms use information from the different sensors to reduce the false-alarm rate and maximize the detection rate. Fusion is the most critical component: it first fuses the information from these sensors, making it easier for the algorithms to automatically detect the presence of foreign objects. It has been observed that the probability of detection increases by almost 96% when the algorithms operate on fused imagery instead of using decision-level fusion.
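The advantage of fusing imagery before detection, rather than fusing per-sensor decisions, can be shown with a toy thresholding detector. The sketch below is an illustrative assumption (synthetic responses, a fixed threshold), not a real mine-detection pipeline: a cue that is sub-threshold in each sensor individually survives only when the pixel responses are combined first.

```python
import numpy as np

def detect(img, thresh=0.5):
    # Toy detector: flag pixels whose response exceeds a fixed threshold.
    return img > thresh

def decision_level_fusion(a, b):
    # Fuse after detection: a cue too weak in either sensor alone is lost.
    return detect(a) | detect(b)

def pixel_level_then_detect(a, b):
    # Fuse before detection: co-located sub-threshold responses from the
    # two sensors reinforce each other and can clear the threshold.
    return detect(a + b)
```

With a weak response of 0.4 at the same pixel in both sensors, decision-level fusion reports nothing, while detection on the fused responses flags the target.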

 

Long Range Surveillance 

Day-and-night border surveillance, as well as seaport and airport surveillance, requires systems that work around the clock, even under poor weather and visibility conditions. Traditionally, CCDs and cooled thermal imagers have been used in conjunction, and it is left to the user to detect threats. In surveillance, the end user is generally already working at maximum sensory capacity, and correlating data from multiple sensors makes the job even harder. Fusion methods provide the best possible view integrated from the different sensors, helping the operator comprehend the scene far more effectively and detect threats successfully. It multiplies situational awareness by providing a single image instead of a montage of views that must be correlated manually.

 

Missile Guidance Systems 

Modern seekers use thermal imagers for their superior threat-detection ability, and the sophisticated ATR algorithms running on the missile's onboard computer use these images to detect target features, keep the missile locked on, and increase the hit ratio. ATR algorithms, however, work best when good target features are visible. A thermal imager can pick up the presence of a target but does not resolve features as well as a camera operating in the visible spectrum (CCD/CMOS). Fusion again plays a major role by combining information from these two sensors in the best possible form: it not only allows the ATR algorithms to pick up the target but also reduces false detections by using the features present in the fused image.

 

Unmanned Aerial Vehicles

UAVs have started to find applications in both civil and military domains. In agricultural UAVs, for example, a sensor payload operating in the near-infrared picks up critical information about crop health, while the visible spectrum is needed for reference; fusing the two helps users identify issues with ease. For military UAVs, fusion increases threat-detection ability by bringing out camouflaged objects on the ground.
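One standard way near-infrared and visible (red-band) data are combined for crop health is the Normalized Difference Vegetation Index (NDVI); the sketch below is a minimal example of that well-known formula, with illustrative reflectance values, and is not tied to any specific payload.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    # Normalized Difference Vegetation Index: healthy vegetation reflects
    # strongly in near-infrared and absorbs red, pushing NDVI toward +1;
    # bare soil or stressed crops sit near 0. `eps` guards division by zero.
    return (nir - red) / (nir + red + eps)
```

A per-pixel NDVI map rendered over the visible image lets an operator spot stressed patches of a field at a glance.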

 
