Infrared and visible image fusion technology based on directionlets transform
EURASIP Journal on Wireless Communications and Networking volume 2013, Article number: 42 (2013)
Abstract
This article presents an infrared and visible image fusion algorithm based on the directionlets transform. The registered source images are decomposed into low-frequency and high-frequency coefficients by the directionlets transform. The fused low-frequency coefficients are taken as the mean of the source low-frequency coefficients, while the high-frequency coefficients are selected by the local-variance-maximum principle; the fusion coefficients of the fused image are thereby acquired. Finally, the fused image is obtained by the inverse directionlets transform. Experiments indicate that the algorithm extracts the features of the original images well. Compared with traditional fusion algorithms, the algorithm presented in this article provides a better subjective visual effect, and the standard deviation and entropy of the result are somewhat increased.
1. Introduction
Infrared and visible image fusion is a branch of multi-source image fusion. Multi-source image fusion is the process of combining two or more kinds of image information, acquired by different types of image sensors, into a unified description, towards the formation of a high-performance perception system. It is a technology for the comprehensive and optimized treatment of the acquisition, representation, and internal relations of multiple sources of information [1].
Because infrared and visible detectors measure different wavelength ranges of the target and use different imaging modalities, the information in an infrared image and a visible image is very different and complementary. By fusing the infrared and visible images and synthesizing their complementary and redundant information, the object contours in the fused image become clearer than in either source image, the result is information-rich and easy to identify, and the sensors' perception of the environment is extended.
Image fusion is mainly divided into three levels: pixel-level fusion, feature-level fusion, and decision-level fusion. This article focuses only on pixel-level fusion. Many pixel-level image fusion methods exist, including the weighted average method [2], pyramid decomposition methods [3], the principal component analysis (PCA) method [4], and methods based on the wavelet transform. The wavelet transform has become an important tool in image fusion for its excellent time-frequency analysis properties [5]. However, its advantages lie mainly in the analysis and processing of one-dimensional piecewise-smooth or bounded-variation functions. When the wavelet transform is applied in two or more dimensions, the separable wavelet spanned by one-dimensional wavelets captures only a limited set of directions, so it cannot optimally represent higher-dimensional functions containing line or surface singularities. The wavelet transform therefore reflects only point singularities; the line and surface singularities that make up the linear and edge features of a two-dimensional image are hard to capture with the wavelet approach. In multiresolution decomposition fusion algorithms, wavelet-based schemes consequently tend to introduce high-frequency noise, which degrades the quality of the fused image.
To address these drawbacks of the wavelet transform, Velisavljevic et al. [6] proposed the directionlets transform, an anisotropic transform based on integer lattices. Directionlets still use one-dimensional filter banks, but with multi-directional anisotropic basis functions; they have separable filtering and a critically sampled structure and allow perfect reconstruction. Theoretically, they therefore have advantages over the ordinary wavelet transform and other second-generation wavelet transforms [6].
This article applies the directionlets transform in image fusion experiments. The test results show that it blends edge information well, is subjectively more consistent with human visual characteristics, and is superior to other image fusion methods on objective measures.
2. Directionlets transform
The directionlets transform proposed by Velisavljevic et al. [7–9] is a multi-directional anisotropic transform built on integer lattices. Because it adopts multi-directional anisotropic basis functions, it represents images better than the ordinary wavelet transform. At the same time, it uses only one-dimensional filter banks, with separable filtering and a critically sampled structure, and allows perfect reconstruction; in terms of computational complexity it therefore compares favourably with other second-generation wavelet transforms. The directionlets transform is a new multiscale analysis tool.
When one-dimensional filter banks are used to perform a multi-directional two-dimensional separable wavelet transform, filtering and downsampling are carried out along digital lines with any two rational slopes r_{1} = b_{1}/a_{1} and r_{2} = b_{2}/a_{2}. However, when critical sampling is enforced, the two families of digital lines interfere with each other: along the slopes r_{1} and r_{2}, the concept of a digital line does not provide a systematic rule for the downsampling that must accompany the repeated filtering.
Therefore, Velisavljevic proposed lattice-based multi-directional filtering and downsampling. First, choose the directions of any two rational slopes r_{1} = b_{1}/a_{1} and r_{2} = b_{2}/a_{2} in the grid space Z^{2}, expressed in matrix form as
$${M}_{\Lambda }=\left(\begin{array}{cc}{a}_{1}& {b}_{1}\\ {a}_{2}& {b}_{2}\end{array}\right)=\left(\begin{array}{c}{\mathbf{d}}_{1}\\ {\mathbf{d}}_{2}\end{array}\right)$$(1)
The direction of the vector d_{1}, along the slope r_{1}, is called the transform direction; the direction of the vector d_{2}, along the slope r_{2}, is called the queue direction. The skewed anisotropic wavelet transform applies, in each iteration step, n_{1} one-dimensional transforms along the transform direction and n_{2} along the queue direction (n_{1} ≠ n_{2}), and is denoted S-AWT(M_{Λ}, n_{1}, n_{2}) (skewed anisotropic wavelet transform, AWT). The integer lattice Λ is determined by M_{Λ}. By coset theory, Z^{2} is partitioned into |det M_{Λ}| cosets of the integer lattice Λ. Filtering and downsampling are carried out within each coset; the retained pixels then belong to a sublattice Λ′ of Λ, with a generator matrix obtained accordingly. In this way a sparse representation of the anisotropic, directional objects in the image is obtained. The principle is illustrated in Figure 1 (the transform direction in the figure is 45°).
An image that has undergone the directionlets transform described above has very sparse coefficients and yields more directional information, which can better describe the edge contours of an infrared image.
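The anisotropic iteration AWT(n_{1}, n_{2}), with an unequal number of filtering steps along the transform and queue directions, can be sketched for the simplest case. The sketch below uses Haar filters and plain horizontal/vertical directions only; the lattice-based directional sampling that distinguishes the full directionlets transform is omitted, so this illustrates only the separable anisotropic idea, not S-AWT itself.

```python
import numpy as np

def haar_step(x, axis):
    """One level of 1-D Haar analysis along the given axis:
    returns (approximation, detail) coefficient arrays."""
    x = np.moveaxis(x, axis, 0)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass + downsample
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass + downsample
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

def awt(image, n1=2, n2=1):
    """Anisotropic wavelet iteration AWT(n1, n2): n1 filtering steps
    along the 'transform' direction (here: rows) and n2 along the
    'queue' direction (columns); n1 != n2 gives the anisotropic basis."""
    approx, details = image.astype(float), []
    for _ in range(n1):
        approx, d = haar_step(approx, axis=1)  # filter along rows
        details.append(d)
    for _ in range(n2):
        approx, d = haar_step(approx, axis=0)  # filter along columns
        details.append(d)
    return approx, details

img = np.arange(64, dtype=float).reshape(8, 8)
lo, hi = awt(img, n1=2, n2=1)
print(lo.shape)  # → (4, 2)
```

Because the Haar steps are orthonormal, the total energy of the coefficients equals that of the input, which is a quick sanity check for any implementation of this kind.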
3. The infrared and visible image fusion algorithm based on directionlets
This article introduces directionlets into the fusion of infrared and visible images. The properties of directionlets help to extract the geometric features of the source images and provide more information for the fused image. The advantage of applying multiscale directional analysis in image fusion is that the image can be decomposed into different scales and subbands, so that different fusion rules can be adopted for different scales and directional subbands. A better fusion result is achieved by fully exploiting the visual information of the original multi-source images.
The high-frequency subbands of the directionlets decomposition contain much of the image's high-frequency information. Coefficients with larger absolute values correspond to significant features in a particular directional interval, such as edges, lines, and region boundaries. These coefficients describe the structural information of the image well and have great influence on human vision. The low-frequency subband contains most of the low-frequency information of the image and is the part of the image content primarily perceived by the human eye. This article applies fusion rules matched to the characteristics of the low-frequency and high-frequency subbands, and the fused subband coefficients are thereby acquired.
From the characteristics of the human visual system, it is known that the human eye is not sensitive to the grey value of an individual pixel [10]; the distinctness of an image is determined by all the pixels in a region. To improve the clearness of the fused image, the regional features of the pixels should be considered in the design of the fusion algorithm. Therefore, the coefficient with the larger regional variance is adopted as the fused image's high-frequency subband coefficient in the directionlets transform.
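The regional-variance measure described above can be sketched as follows. The neighbourhood size n and the reflect padding at the image borders are illustrative choices, not fixed by the article.

```python
import numpy as np

def local_variance(coeffs, n=5):
    """Variance over each coefficient's n x n neighbourhood,
    computed on a reflect-padded copy so edges are handled."""
    p = n // 2
    padded = np.pad(coeffs.astype(float), p, mode='reflect')
    win = np.lib.stride_tricks.sliding_window_view(padded, (n, n))
    mean = win.mean(axis=(-2, -1))
    return (win ** 2).mean(axis=(-2, -1)) - mean ** 2

flat = np.ones((6, 6))                        # featureless region
edge = np.zeros((6, 6)); edge[:, 3:] = 1.0    # vertical step edge
print(local_variance(edge).max() > local_variance(flat).max())  # → True
```

A flat region has zero variance everywhere, while windows straddling an edge have large variance, which is exactly why the variance map favours coefficients carrying edge structure.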
The specific fusion rules are as follows:

1
The directionlets decomposition is applied to the visible image V and the infrared image I, yielding the high-frequency subbands V _{H} and I _{H} and the low-frequency subbands V _{L} and I _{L}.

2
For the low-frequency subband, the article chooses the average of the two images' low-frequency subband coefficients as the fused image's low-frequency subband coefficient. Denoting the fused low-frequency subband coefficient by F _{L}, then
$${F}_{\mathrm{L}}=\frac{{V}_{\mathrm{L}}+{I}_{\mathrm{L}}}{2}$$(2) 
3
For the high-frequency subband coefficients, the local-variance-maximum principle is adopted: in the transform domain, the local variance C _{ X } (X = V or I) over the N × N neighbourhood of the corresponding point is calculated, and the coefficient with the larger variance is chosen as the coefficient of the corresponding point of the fused image.
$${F}_{\mathrm{H}}=\left\{\begin{array}{ll}{V}_{\mathrm{H}}, & \left|{C}_{V}\right|\ge \left|{C}_{I}\right|\\ {I}_{\mathrm{H}}, & \left|{C}_{V}\right|<\left|{C}_{I}\right|\end{array}\right.$$(3)
4
The inverse directionlets transform is applied to the fused coefficients, giving the fused image F.
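Fusion rules (2) and (3) can be sketched as follows, assuming the subband coefficient arrays and the local variance maps C_V and C_I have already been computed; the small arrays in the demo are illustrative values only.

```python
import numpy as np

def fuse_subbands(V_L, I_L, V_H, I_H, C_V, C_I):
    """Fusion rules of Section 3: average the low-frequency
    coefficients (rule 2); for each high-frequency coefficient keep
    the source whose local variance is larger (rule 3)."""
    F_L = (V_L + I_L) / 2.0               # rule (2)
    F_H = np.where(C_V >= C_I, V_H, I_H)  # rule (3), pointwise selection
    return F_L, F_H

# Toy 2x2 subbands (hypothetical values, for illustration only)
V_L = np.full((2, 2), 4.0); I_L = np.full((2, 2), 2.0)
V_H = np.array([[1.0, -3.0], [0.0, 2.0]])
I_H = np.array([[2.0,  1.0], [5.0, 0.0]])
C_V = np.array([[9.0,  1.0], [0.0, 4.0]])
C_I = np.array([[1.0,  9.0], [4.0, 0.0]])

F_L, F_H = fuse_subbands(V_L, I_L, V_H, I_H, C_V, C_I)
print(F_L[0, 0])      # → 3.0
print(F_H.tolist())   # → [[1.0, 1.0], [5.0, 2.0]]
```

The inverse directionlets transform of (F_L, F_H) would then yield the fused image F; that step depends on the particular directionlets implementation and is not shown here.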
4. Experimental results and analysis
The experiments fuse registered infrared and visible images with different approaches. Figure 2a,b shows the infrared and visible images of an airfield, respectively; both contain much detail and texture information. Figure 2c shows the fusion result based on the Laplacian pyramid (LP) transform; Figure 2d the result based on the discrete wavelet transform (DWT); and Figure 2e the result based on the directionlets transform. The neighbourhood size is 5 × 5 pixels, and the DWT and LP decompositions use three levels.
As can be seen from the figure, images (c) and (d) show varying degrees of blur; for example, the edges of the runways and the outline of the aircraft are not clear. Compared with (c) and (d), image (e) is clearer in visual effect: the contours of the aircraft and distant details such as trees and buildings are more distinct.
Table 1 gives an objective evaluation of the image quality in this set of experiments. As can be seen from the table, the standard deviation and the average gradient of image (e) are the highest, demonstrating that image (e) has better contrast and sharpness, which is consistent with the subjective evaluation.
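The objective measures discussed here — standard deviation, entropy, and average gradient — can be sketched as follows. The exact conventions (a 256-bin grey-level histogram for entropy, and the √((g_x² + g_y²)/2) form of the average gradient) are common choices assumed here, not taken from the article.

```python
import numpy as np

def std_dev(img):
    """Standard deviation: a simple contrast measure."""
    return float(img.std())

def entropy(img):
    """Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean magnitude of horizontal/vertical differences,
    a common sharpness measure for fused images."""
    gx = np.diff(img.astype(float), axis=1)[:-1, :]
    gy = np.diff(img.astype(float), axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2.0)))

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (64, 64)).astype(float)  # rich grey levels
flat = np.full((64, 64), 128.0)                       # featureless image
print(std_dev(noisy) > std_dev(flat))    # → True
print(entropy(noisy) > entropy(flat))    # → True
```

Higher values of all three measures indicate higher contrast, information content, and sharpness, respectively, which is how Table 1's comparison should be read.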
5. Conclusions
This article puts forward an infrared and visible image fusion algorithm based on the directionlets transform. Compared with the traditional wavelet transform, directionlets better preserve the feature information of the infrared and visible images, enhance the fused image's representation of spatial detail, and increase its information content. The experiments show that the approach in this article is superior to methods such as the wavelet transform and Laplacian pyramid decomposition.
References
 1.
Zhou X, Liu RA, Chen J: Infrared and visible image fusion enhancement technology based on multiscale directional analysis. IEEE Comput. Soc. 2009, 1–3.
 2.
Hall DL, Llinas J: An introduction to multisensor data fusion. Proc. IEEE 1997, 85(1):6–23.
 3.
Toet A, van Ruyven LJ, Valeton JM: Merging thermal and visual images by a contrast pyramid. Opt. Eng. 1989, 28(7):789–792.
 4.
Yonghong J: Fusion of Landsat TM and SAR image based on principal component analysis. Remote Sens. Technol. Appl. 1998, 13(1):4649–4654.
 5.
Lin YC, Liu QH: An image fusion algorithm based on directionlet transform. Nanotechnol. Precision Eng. 2010, 8(6):565–568.
 6.
Velisavljevic V, Beferull-Lozano B, Vetterli M: Directionlets: anisotropic multidirectional representation with separable filtering. IEEE Trans. Image Process. 2006, 15(7):1916–1933.
 7.
Velisavljevic V: Low-complexity iris coding and recognition based on directionlets. IEEE Trans. Inf. Forens. Secur. 2009, 4(3):410–417.
 8.
Velisavljevic V, Beferull-Lozano B, Vetterli M: Space-frequency quantization for image compression with directionlets. IEEE Trans. Image Process. 2007, 16(7):1761–1773.
 9.
Velisavljevic V, Beferull-Lozano B, Vetterli M: Efficient image compression using directionlets. In 6th International Conference on Information, Communications & Signal Processing 2007, 1–5.
 10.
Yang L, Guo B, Ni W: Multi-focus image fusion algorithm based on region statistics in contourlet domain. J. Xi'an Jiaotong Univ. 2007, 41(4):448–452.
Acknowledgment
The authors are grateful to the anonymous referees for constructive comments. This study was funded by the Tianjin Normal University Doctoral Fund (52X09008, 52LX14).
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Keywords
 Infrared image
 Visible image
 Image fusion
 Directionlets transform