
Infrared and visible image fusion technology based on directionlets transform

Abstract

This article presents an infrared and visible image fusion algorithm based on the directionlets transform. The registered source images are decomposed into low-frequency and high-frequency coefficients by the directionlets transform. The low-frequency coefficients of the fused image are taken as the mean of the two sources' low-frequency coefficients, while the high-frequency coefficients are selected by the local-variance-maximum principle; in this way the fusion coefficients of the fused image are acquired. Finally, the fused image is obtained by the inverse directionlets transform. Experiments indicate that the algorithm extracts the features of the source images well. Compared with traditional fusion algorithms, the algorithm presented here gives a better subjective visual effect, and the standard deviation and entropy are somewhat increased.

1. Introduction

Infrared and visible image fusion is a branch of multi-source image fusion. Multi-source image fusion is the process of obtaining a unified description of a scene by employing different types of image sensors and combining two or more kinds of image information effectively, toward building a high-performance perception system. It is a technology for the comprehensive, optimized treatment of the acquisition, representation, and internal relations of multi-source information [1].

Because infrared detectors measure a different wavelength range of target information and use a different imaging modality, the information in infrared and visible images is very different and complementary. By fusing the infrared and visible images and synthesizing their complementary and redundant information, the object contours in the fused image become clearer than in either source image; the result is information-rich and easy to identify, and the image sensors' perception of the environment is extended.

Image fusion is commonly divided into three levels: pixel-level fusion, feature-level fusion, and decision-level fusion. This article focuses on pixel-level fusion. Many pixel-level image fusion methods exist, including the weighted average method [2], pyramid decomposition methods [3], principal component analysis (PCA) [4], and methods based on the wavelet transform. The wavelet transform has become an important tool in image fusion because of its excellent time-frequency analysis properties [5]. However, its advantages lie mainly in the analysis and processing of one-dimensional piecewise-smooth functions or functions of bounded variation. When the wavelet transform is applied to two or more dimensions, the separable wavelet spanned by one-dimensional wavelets captures only a limited number of directions; it therefore cannot represent optimally a high-dimensional function containing line or surface singularities. The wavelet transform can only capture point singularities in a signal, whereas the line and surface singularities that make up the linear and edge features of a two-dimensional image are hard to capture with a wavelet approach. Moreover, in multi-resolution decomposition fusion algorithms, wavelet-based fusion tends to introduce high-frequency noise, which degrades the quality of the fused image.
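For concreteness, a minimal wavelet-based fusion baseline of the kind discussed above might look like the following Python sketch. It is illustrative only: it uses the PyWavelets package, and the rules shown (averaged approximation coefficients plus maximum-magnitude detail coefficients) are a common choice, not a rule taken from this paper.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_fuse(a, b, wavelet="db4", level=3):
    """Common DWT fusion baseline: average the approximation coefficients,
    keep the larger-magnitude detail coefficient at each position."""
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]          # approximation band: average
    for da, db in zip(ca[1:], cb[1:]):       # (horizontal, vertical, diagonal)
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```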

To address these drawbacks of the wavelet transform, Velisavljevic et al. [6] proposed the directionlets transform, an anisotropic transform based on integer lattices. Directionlets still use one-dimensional filter banks, but their basis functions are multi-directional and anisotropic; the transform has separable filtering and a critically sampled structure, and it allows perfect reconstruction. It therefore has theoretical advantages over the ordinary wavelet transform and other second-generation wavelet transforms [6].

This article applies the directionlets transform to image fusion experiments. The test results show that it blends the edge information of the images well; subjectively, the result is more consistent with human visual characteristics, and the objective evaluation is also superior to other image fusion methods.

2. Directionlets transform

The directionlets transform proposed by Velisavljevic et al. [7–9] is a multi-directional anisotropic transform built on integer lattices. It adopts multi-directional anisotropic basis functions and therefore has advantages over the ordinary wavelet transform in representing images. At the same time, it uses only one-dimensional filter banks, with separable filtering and a critically sampled structure, and allows perfect reconstruction; in terms of computational complexity it is therefore also more advantageous than other second-generation wavelet transforms. The directionlets transform is a new multi-scale analysis tool.

When one-dimensional filter banks are used to build a multi-directional two-dimensional separable wavelet transform, filtering and down-sampling can be carried out along digital lines of any two rational slopes r_1 = b_1/a_1 and r_2 = b_2/a_2. However, once critical sampling is imposed, the two families of digital lines interfere with each other: the concept of the digital line alone does not provide a systematic rule for down-sampling under repeated filtering and sub-sampling along the slopes r_1 and r_2.

Therefore, Velisavljevic proposed multi-directional filtering and down-sampling based on integer lattices. First, choose the directions of any two rational slopes r_1 = b_1/a_1 and r_2 = b_2/a_2 in the grid space Z², expressed in matrix form as

$$
M_\Lambda = \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \end{bmatrix}, \qquad a_1, a_2, b_1, b_2 \in \mathbb{Z}, \quad r_1 \neq r_2.
$$
(1)

The direction of the vector d_1, along the slope r_1, is called the transform direction; the direction of the vector d_2, along the slope r_2, is called the alignment direction. The skewed anisotropic wavelet transform built on this lattice applies n_1 and n_2 (n_1 ≠ n_2) one-dimensional transform steps per iteration along the transform direction and the alignment direction, respectively, and is denoted S-AWT(M_Λ, n_1, n_2) (AWT: anisotropic wavelet transform). The integer lattice Λ is determined from M_Λ. By coset theory, Z² is partitioned into the |det M_Λ| cosets of the full integer lattice Λ. Filtering and down-sampling are performed within each coset, after which the retained pixels belong to a sublattice Λ′ of the integer lattice Λ, whose generator matrix M′_Λ is produced accordingly. In this way a sparse representation of the anisotropic objects along the chosen directions of the image is obtained. The principle is shown in Figure 1 (the transform direction in the figure is 45°).

Figure 1. Filtering and down-sampling based on the integer lattice. (a) Expressed with the generator matrix; (b) the two-dimensional two-channel filter bank.

An image processed by the directionlets transform described above has very sparse coefficients and yields richer directional information, which makes it well suited to describing the edge contours of an infrared image.
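To make the lattice-based scheme concrete, the following minimal Python sketch (illustrative, not from the paper) labels each pixel of a grid with the coset of the integer lattice Λ generated by M_Λ to which it belongs; filtering and down-sampling can then be applied within each coset as described above.

```python
import numpy as np

def lattice_cosets(shape, M):
    """Label each pixel (x, y) of a `shape`-sized grid with the index of
    the coset of the lattice generated by the rows of M that it belongs
    to; the number of cosets is |det(M)|."""
    M = np.asarray(M, dtype=float)
    n_cosets = abs(int(round(np.linalg.det(M))))
    Minv = np.linalg.inv(M)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)  # points of Z^2
    # p lies in the lattice iff p @ inv(M) is integer; the fractional
    # part of p @ inv(M) therefore identifies p's coset.
    frac = np.round(pts @ Minv, 6) % 1.0
    _, labels = np.unique(frac, axis=0, return_inverse=True)
    return labels.reshape(shape), n_cosets

# 45-degree example in the spirit of Figure 1: d1 = (1, 1), d2 = (0, 2)
labels, n = lattice_cosets((8, 8), [[1, 1], [0, 2]])
print(n)       # 2 cosets, since |det(M)| = 2
print(labels)  # checkerboard partition: pixels with x + y even vs. odd
```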

3. The infrared and visible image fusion algorithm based on directionlets

This article introduces directionlets into the fusion of infrared and visible images. The properties of directionlets make them well suited to extracting the geometric features of the source images and providing more information for the fused image. The advantage of applying multi-scale directional analysis to image fusion is that the image can be decomposed into different scales and sub-bands; during fusion, different scales and different directional sub-bands can therefore adopt different fusion rules. A better fusion result is achieved by fully exploiting the visual information of the original multi-source images.

The high-frequency sub-bands of the directionlets decomposition contain much of the image's high-frequency information. Coefficients of larger absolute value correspond to significant features in a given directional interval, such as edges, lines, and region boundaries. These coefficients describe the structural information of the image well and strongly influence human vision. The low-frequency sub-band contains most of the low-frequency information of the image and is the part of the image content primarily perceived by the human eye. This article applies corresponding fusion rules according to the characteristics of the low-frequency and high-frequency sub-bands, from which the sub-band coefficients of the fused image are acquired.

From the characteristics of the human visual system, it is known that the eye is not sensitive to the gray value of an individual pixel [10]; the distinctness of an image is determined by all the pixels in a region. To improve the clarity of the fused image, the regional features of pixels should therefore be considered in the design of the fusion algorithm. Accordingly, the coefficient with the larger regional variance is adopted as the high-frequency sub-band coefficient of the fused image in the directionlets transform domain.

The specific fusion rules are as follows:

1. Apply the directionlets decomposition to the visible image V and the infrared image I, yielding the high-frequency sub-bands V_H and I_H and the low-frequency sub-bands V_L and I_L.

2. For the low-frequency sub-band, this article takes the average of the two images' low-frequency coefficients as the low-frequency coefficient of the fused image. Denoting the fused low-frequency coefficient by F_L,

$$F_L = \frac{V_L + I_L}{2}.$$
(2)

3. For the high-frequency coefficients, the local-variance-maximum principle is adopted: in the transform domain, compute the local variance C_X (X = V or I) over the N × N neighborhood of each point, and take the coefficient from the image whose variance is larger as the coefficient of the corresponding point in the fused image (a sketch of these rules follows this list):

$$F_H = \begin{cases} V_H, & C_V \geq C_I \\ I_H, & C_V < C_I \end{cases}$$
(3)

4. Apply the inverse directionlets transform to the fused coefficients to obtain the fused image F.
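The following Python sketch (illustrative, not the authors' code) implements fusion rules (2) and (3) on already-decomposed sub-bands. Since no standard library provides a directionlets transform, `dlt_decompose` and `dlt_reconstruct` are hypothetical placeholders for the forward and inverse transforms; `scipy.ndimage.uniform_filter` supplies the N × N neighborhood statistics.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(x, n=5):
    """Variance of x over the n-by-n neighborhood of each coefficient."""
    mean = uniform_filter(x, size=n)
    mean_sq = uniform_filter(x * x, size=n)
    return np.maximum(mean_sq - mean * mean, 0.0)

def fuse_subbands(v_low, i_low, v_high, i_high, n=5):
    """Apply rule (2) to the low-frequency sub-bands and rule (3),
    the local-variance-maximum principle, to each high-frequency one."""
    f_low = (v_low + i_low) / 2.0                        # Eq. (2)
    f_high = [np.where(local_variance(vh, n) >= local_variance(ih, n),
                       vh, ih)                           # Eq. (3)
              for vh, ih in zip(v_high, i_high)]
    return f_low, f_high

# Usage under the stated assumptions (dlt_* are hypothetical):
#   v_low, v_high = dlt_decompose(visible)
#   i_low, i_high = dlt_decompose(infrared)
#   f_low, f_high = fuse_subbands(v_low, i_low, v_high, i_high, n=5)
#   fused = dlt_reconstruct(f_low, f_high)
```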

4. Experimental results and analysis

The experiments used registered infrared and visible images and fused them with different approaches. Figure 2a,b shows the infrared and visible images of an airfield, respectively; the two images contain much detail and texture information. Figure 2c shows the fusion result based on the Laplacian pyramid (LP) transform, Figure 2d the result based on the discrete wavelet transform (DWT), and Figure 2e the result based on the directionlets transform. The neighborhood size is 5 × 5 pixels, and the DWT and LP decompositions use three levels.

Figure 2. Fusion experiment. (a) Infrared image. (b) Visible image. (c) LP. (d) DWT. (e) Directionlets.

As can be seen from the figure, images (c) and (d) are blurred to different degrees; for example, the edge information of the runways and the outline of the aircraft are not clear. Compared with (c) and (d), image (e) is visually clearer: the contours of the aircraft and distant details such as trees and buildings are more distinct.

Table 1 gives an objective evaluation of the image quality in this set of experiments. As can be seen from the table, image (e) has the highest standard deviation and average gradient, which demonstrates that image (e) has better contrast and sharpness; this is consistent with the subjective evaluation.

Table 1 Comparison of statistical parameters about fusion results according to different fusion rules
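For reference, the following Python sketch computes the kinds of objective measures used in such comparisons; since the paper does not spell out its exact formulas, the standard definitions of standard deviation, gray-level entropy, and average gradient are assumed here.

```python
import numpy as np

def std_dev(img):
    """Standard deviation of the gray values (contrast measure)."""
    return float(np.std(img))

def entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, in bits
    (information-content measure)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def avg_gradient(img):
    """Mean magnitude of the first differences (sharpness measure)."""
    img = np.asarray(img, dtype=np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```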

5. Conclusions

This article has put forward an infrared and visible image fusion algorithm based on the directionlets transform. Compared with the traditional wavelet transform, directionlets better preserve the feature information of the infrared and visible images, enhance the fused image's representation of spatial detail, and increase the information content of the fused image. The experiments show that the approach of this article outperforms the wavelet transform, Laplacian pyramid decomposition, and similar methods.

References

1. Zhou X, Liu R-A, Chen J: Infrared and visible image fusion enhancement technology based on multi-scale directional analysis. IEEE Comput. Soc. 2009, 1-3.

2. Hall DL, Llinas J: An introduction to multisensor data fusion. Proc. IEEE 1997, 85(1):6-23.

3. Toet A, van Ruyven LJ, Valeton JM: Merging thermal and visual images by a contrast pyramid. Opt. Eng. 1989, 28(7):789-792.

4. Yonghong J: Fusion of Landsat TM and SAR image based on principal component analysis. Remote Sens. Technol. Appl. 1998, 13(1):4649-4654.

5. Lin YC, Liu QH: An image fusion algorithm based on directionlet transform. Nanotechnol. Precision Eng. 2010, 8(6):565-568.

6. Velisavljevic V, Beferull-Lozano B, Vetterli M, Dragotti PL: Directionlets: anisotropic multi-directional representation with separable filtering. IEEE Trans. Image Process. 2006, 15(7):1916-1933.

7. Velisavljevic V: Low-complexity iris coding and recognition based on directionlets. IEEE Trans. Inf. Forensics Secur. 2009, 4(3):410-417.

8. Velisavljevic V, Beferull-Lozano B, Vetterli M: Space-frequency quantization for image compression with directionlets. IEEE Trans. Image Process. 2007, 16(7):1761-1773.

9. Velisavljevic V, Beferull-Lozano B, Vetterli M: Efficient image compression using directionlets. In: 6th International Conference on Information, Communications & Signal Processing, 2007, 1-5.

10. Yang L, Guo B-L, Ni W: Multifocus image fusion algorithm based on region statistics in contourlet domain. J. Xi'an Jiaotong Univ. 2007, 41(4):448-452.


Acknowledgment

The authors are grateful to the anonymous referees for constructive comments. This study was funded by the Tianjin Normal University Doctoral Fund (52X09008, 52LX14).

Author information


Correspondence to Xin Zhou.

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Zhou, X., Yin, X., Liu, R.A. et al. Infrared and visible image fusion technology based on directionlets transform. J Wireless Com Network 2013, 42 (2013). https://doi.org/10.1186/1687-1499-2013-42