Multispectral image fusion for visual display

Abstract
This paper describes a contrast-based monochromatic fusion process. The fusion process is aimed at on-board, real-time operation, maximizing the information content in the combined image while retaining the visual cues that are essential for navigation/piloting tasks. The method is a multiscale fusion process that combines pixel selection from a single image with a weighting of the two (or multiple) input images. Each input image is decomposed into spatial sub-bands of different scales and orientations, and within each scale a combination rule is applied to the corresponding pixels taken from the two components. Even when the combination rule is a binary selection, the fused image may contain a mixture of pixel values taken from the two components, since the selection is applied independently at each scale. The visible-band input is given preference at coarse scales, where large features dominate. This fusion process yields a fused image better matched to natural, intuitive human perception, which is essential for pilotage and navigation under stressful conditions, while maintaining or enhancing the target detection and recognition performance of proven display fusion methodologies. The fusion concept was demonstrated on imagery from image intensifiers and forward-looking infrared (FLIR) sensors currently used by the US Navy for navigation and targeting. The approach is easily extensible to more than two bands.
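The multiscale scheme described above can be illustrated with a minimal sketch. The pyramid construction, the per-scale binary selection rule, and the coarse-scale weighting toward the visible band are assumptions standing in for the paper's actual sub-band decomposition and combination rules; a simple Laplacian-style pyramid built with average-pool/nearest-neighbour resampling is used here in place of the (unspecified) oriented filter bank.

```python
import numpy as np

def _downsample(img):
    # 2x2 average pooling as a simple stand-in for a Gaussian REDUCE step
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    t = img[:h, :w]
    return (t[0::2, 0::2] + t[1::2, 0::2] + t[0::2, 1::2] + t[1::2, 1::2]) / 4.0

def _upsample(img, shape):
    # nearest-neighbour EXPAND back to the finer grid, edge-padded to `shape`
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    pad_h = max(shape[0] - up.shape[0], 0)
    pad_w = max(shape[1] - up.shape[1], 0)
    if pad_h or pad_w:
        up = np.pad(up, ((0, pad_h), (0, pad_w)), mode="edge")
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # band-pass levels (fine to coarse) plus a low-pass residual
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = _downsample(cur)
        pyr.append(cur - _upsample(low, cur.shape))
        cur = low
    pyr.append(cur)
    return pyr

def fuse(visible, infrared, levels=3, base_weight=0.7):
    """Fuse two co-registered images; `base_weight` biases coarse scales
    toward the visible band (hypothetical parameter, not from the paper)."""
    pv = laplacian_pyramid(visible, levels)
    pi = laplacian_pyramid(infrared, levels)
    fused = []
    for a, b in zip(pv[:-1], pi[:-1]):
        # per-scale binary selection: keep the higher-contrast coefficient
        fused.append(np.where(np.abs(a) >= np.abs(b), a, b))
    # coarse scale: weighted sum with preference for the visible band
    fused.append(base_weight * pv[-1] + (1.0 - base_weight) * pi[-1])
    # collapse the pyramid back to a single fused image
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = _upsample(out, lap.shape) + lap
    return out
```

Because the selection is made per scale, the fused image can draw fine detail from one sensor and large-area structure from the other, which is the behaviour the abstract describes.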