Compressive sensing image fusion algorithm based on directionlets
EURASIP Journal on Wireless Communications and Networking volume 2014, Article number: 19 (2014)
Abstract
This paper presents a new image fusion method based on compressed sensing (CS). The method decomposes two or more source images using the directionlet transform, obtains sparse matrices through sparse representation of the directionlet coefficients, and fuses the sparse matrices with the maximum-absolute-coefficient scheme. Compressed samples are then obtained through random observation, and the fused image is recovered from these reduced samples by solving an optimization problem. The study demonstrates that the compressive sensing image fusion algorithm based on directionlets has several advantages. Simulations show that the proposed algorithm has a simple structure, is easy to implement, and achieves better fusion performance.
1. Introduction
The main goal of image fusion is to extract all the important features from all input images and integrate them to form a fused image which is more informative and suitable for human visual perception or computer processing.
There are a number of pixel-level image fusion methods, including the weighted average method [1, 2], the pyramid transform method [3], the principal component analysis (PCA) method [4], and fusion methods based on the wavelet transform. The wavelet transform has become an important tool in image fusion because of its excellent time-frequency analysis properties [5]. However, wavelet bases are isotropic, offer only a limited number of directions, and fail to represent highly anisotropic edges and contours in images well. To overcome these drawbacks of the wavelet transform, the directionlet transform, an anisotropic transform based on integer lattices, was proposed by Velisavljevic et al. The directionlet transform still uses one-dimensional filter banks, but because its basis functions are multidirectional and anisotropic, it retains separable filtering and critical sampling and allows perfect reconstruction. Thus, in theory, it has advantages over the standard wavelet transform and other second-generation wavelet transforms [6].
In recent years, inspired by the idea of "sparse" approximation, a novel theory called compressed sensing (CS) has been developed [7–9]. The CS principle states that if a signal is compressible or sparse in some transform domain, it can be projected onto a low-dimensional space using a measurement matrix that is incoherent with the transform basis, while still allowing reconstruction, with high probability, from this small number of random linear measurements by solving an optimization problem. CS is therefore expected to provide a new approach to image fusion when combined with directionlets.
This article proposes a new scheme for image fusion. In our scheme, the directionlet transform first decomposes each source image into two components, i.e., a dense and a sparse component. The dense components are fused by a selection method according to the manifestation of defocus, while the sparse components are fused within the CS framework by fusing a few linear measurements and solving an l_{1}-norm minimization problem with a two-step iterative shrinkage reconstruction algorithm. The proposed fusion scheme is applied to infrared and visible image fusion experiments, and its performance is evaluated in terms of computational efficiency, visual quality, and quantitative criteria.
The test results show that the method blends edge information in the images well, is subjectively more consistent with human visual characteristics, and is also superior to other image fusion methods under objective evaluation.
2. Directionlet transform
The directionlet transform, proposed by Velisavljevic et al., is a multidirectional anisotropic transform built upon integer lattices [10–12]. It adopts multidirectional anisotropic basis functions and therefore represents images better than the standard wavelet transform. At the same time, it uses only one-dimensional filter banks with separable filtering and critical sampling and allows perfect reconstruction; thus, in terms of computational complexity, it also has an advantage over other second-generation wavelet transforms. The directionlet transform is a new multiscale analysis tool.
When one-dimensional filter banks are used to perform a multidirectional two-dimensional separable wavelet transform, filtering and downsampling are applied along digital lines with any two rational slopes r_{1} = b_{1}/a_{1} and r_{2} = b_{2}/a_{2}. However, when critical sampling is enforced, the two families of digital lines interfere with each other; that is, along the slopes r_{1} and r_{2}, the concept of the digital line cannot provide a systematic rule for downsampling under repeated filtering and subsampling.
Therefore, Velisavljevic proposed multidirectional filtering and downsampling based on integer lattices. First, choose the directions of any two rational slopes r_{1} = b_{1}/a_{1} and r_{2} = b_{2}/a_{2} in the grid space Z^{2}, expressed in matrix form as

M_{ Λ } = [d_{1}; d_{2}] = [a_{1}, b_{1}; a_{2}, b_{2}], where d_{1} = (a_{1}, b_{1}), d_{2} = (a_{2}, b_{2}), and a_{i}, b_{i} ∈ Z.
The direction of the vector d_{1}, along the slope r_{1}, is called the transform direction; the direction of the vector d_{2}, along the slope r_{2}, is called the alignment direction. The skewed anisotropic wavelet transform applies, in one iteration step, n_{1} transforms along the transform direction and n_{2} transforms along the alignment direction (n_{1} ≠ n_{2}), denoted S-AWT(M_{ Λ }, n_{1}, n_{2}). From M_{ Λ }, the integer lattice Λ can be ascertained. According to coset theory, Z^{2} is partitioned into |det M_{ Λ }| cosets of the full integer lattice Λ. Filtering and downsampling are performed in every coset, and the retained pixels then belong to a sublattice Λ′ of the integer lattice Λ with a correspondingly generated matrix. In this way, a sparse directional representation of anisotropic objects in the image can be obtained. The principle is shown in Figure 1 (the transform direction in the figure is 45°).
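As an illustration of the lattice-based partition described above, the following Python sketch (an assumed minimal model, not the authors' implementation) labels each point of Z^{2} with the coset it belongs to; the generator matrix M is an illustrative choice for the 45° example with |det M| = 2:

```python
import numpy as np

# Illustrative generator matrix: transform direction d1 = (1, 1) and
# alignment direction d2 = (0, 2), so |det M| = 2.
M = np.array([[1, 1],
              [0, 2]])

def coset_label(p, M):
    """Label the coset of Z^2 modulo the lattice generated by the rows of M.
    Points p and q lie in the same coset iff (p - q) M^{-1} is integer."""
    t = np.linalg.solve(M.T, np.asarray(p, dtype=float))  # p in the lattice basis
    frac = np.mod(np.round(t, 9), 1.0)                    # fractional part -> coset id
    return tuple(np.round(frac, 9))

# Z^2 splits into exactly |det M| = 2 cosets; filtering and downsampling
# are then performed separately in each coset.
labels = {coset_label((x, y), M) for x in range(8) for y in range(8)}
print(len(labels))  # number of distinct cosets
```

For this M, a point (x, y) is classified by the parity of y − x, so the two cosets interleave along the 45° direction.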
An image processed by the directionlet transform described above has very sparse coefficients and yields more directional information, which better describes the edge contours of an infrared image.
3. Compressive sensing and image fusion
Compressive sensing enables a sparse or compressible signal to be reconstructed from a small number of nonadaptive linear projections, thus significantly reducing the sampling and computation costs [13]. CS has many promising applications in signal acquisition, compression, and medical imaging. In this paper, we investigate its potential application in the image fusion.
Consider a real-valued, finite-length, one-dimensional discrete-time signal x, which can be viewed as an N × 1 column vector in the space R^{N} with elements x[n], n = 1, 2, …, N. If the signal is K-sparse, it can be written as

x = ψs (1)
where ψ is the N × N sparse basis matrix and s is the N × 1 coefficient column vector.
When the signal x has only K ≪ N nonzero coefficients in the basis ψ, ψ is called the sparse basis of the signal x. The CS theory indicates that if the transform coefficients of a length-N signal x in an orthogonal basis ψ are sparse (that is, only a small number of coefficients are nonzero), then by projecting these coefficients onto a measurement basis ϕ that is incoherent with the sparse basis ψ, an M × 1 measurement vector y can be obtained. In this way, compressed sampling of the signal x is realized. The expression is

y = ϕx = ϕψs = Θs (2)
where ϕ is the M × N measurement matrix and Θ = ϕψ is the M × N projection matrix. The measurement vector y is related to the sparse signal s through the projection matrix Θ. Only when the orthogonal basis ψ is incoherent with the measurement matrix ϕ, that is, when the projection matrix satisfies the restricted isometry property (RIP), can the signal x be accurately recovered from these measurements by solving the optimization problem

min ||s||_{1} subject to y = Θs (3)

The block diagram derived from the CS theory for the field of image processing is shown in Figure 1.
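The recovery problem in formula (3) can be sketched with a small basis pursuit example, recasting the l_{1} minimization as a linear program; SciPy's linprog is assumed to be available, and the sizes N, M, K are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 64, 32, 4                       # length, measurements, sparsity (illustrative)

s = np.zeros(N)                           # K-sparse coefficient vector
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
Theta = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian projection matrix
y = Theta @ s                             # M << N compressed measurements

# Basis pursuit: min ||s||_1 s.t. Theta s = y, as an LP with s = u - v, u, v >= 0.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([Theta, -Theta]), b_eq=y,
              bounds=(0, None), method="highs")
s_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(s_hat - s)))          # near zero: exact recovery
```

With M well above the sparsity level, a random Gaussian Θ satisfies the incoherence requirement with overwhelming probability, which is what makes the exact recovery possible.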
The advantage of the CS theory is that the amount of data obtained via projection measurements is far smaller than that of conventional sampling methods, breaking the bottleneck of the Shannon sampling theorem and making high-resolution signal acquisition possible. The appeal of CS theory is that it has important implications and practical significance for many fields of science and engineering, such as statistics, information theory, coding theory, and theoretical computer science.
Compared with traditional fusion algorithms, the CS-based image fusion algorithm shows significant advantages: image fusion can be performed without full sampling of the images, the quality of image fusion can be improved by increasing the number of measurements, and the algorithm saves storage space and reduces computational complexity. The main ideas of the CS-based image fusion algorithm are as follows: first, the two images to be fused undergo the directionlet transform; sparse matrices are obtained after the directionlet coefficients are given a sparse treatment; fusion rules for integrating the sparse matrices are then applied; compressive samples are obtained through a random measurement matrix; and finally, the fused image is recovered by solving the optimization problem.
The practical function of the wavelet transform is signal decorrelation: all the information of the signal is concentrated into a few wavelet coefficients with large amplitude. These large wavelet coefficients contain far more energy than the small ones, so in reconstructing the signal, a large coefficient is more important than a small one.
This paper adopts the maximum-absolute-value fusion rule: comparing the wavelet coefficients at the same location in the two images, the one with the greater absolute value is selected as the fused wavelet coefficient. The expression is as follows:

D_{ f }(x, y) = D_{ M }(x, y), where |D_{ M }(x, y)| = max_{i = 1, …, I} |D_{ i }(x, y)|
where D_{ f } is the fused wavelet coefficient, D_{ M } is the wavelet coefficient with the largest absolute value among the coefficients at the same location in the different source images, and I is the number of source images.
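The maximum-absolute-value rule amounts to an element-wise selection between coefficient arrays, which can be sketched as follows (an illustrative helper, not the authors' code):

```python
import numpy as np

def fuse_abs_max(coeff_a, coeff_b):
    """Fuse two coefficient arrays by keeping, at each location,
    the coefficient with the larger absolute value."""
    a, b = np.asarray(coeff_a), np.asarray(coeff_b)
    return np.where(np.abs(a) >= np.abs(b), a, b)

A = np.array([[ 3.0, -1.0], [0.5, -4.0]])
B = np.array([[-2.0,  2.5], [0.1,  3.0]])
print(fuse_abs_max(A, B))   # [[ 3.   2.5]
                            #  [ 0.5 -4. ]]
```

For more than two source images, the same selection extends by reducing over the stack of coefficient arrays.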
The directionlet transform is applied to each source image, and the directionlet coefficients are given a sparse treatment: small coefficients (those close to zero) are set to zero to obtain an approximately sparse coefficient matrix.
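The sparse treatment can be sketched as a hard-thresholding step that keeps only the largest-magnitude coefficients; the keep_ratio parameter below is an illustrative choice, not a value from the paper:

```python
import numpy as np

def hard_threshold(coeffs, keep_ratio=0.1):
    """Zero out all but the largest-magnitude fraction `keep_ratio`
    of the coefficients, yielding an approximately sparse matrix."""
    c = np.asarray(coeffs, dtype=float)
    k = max(1, int(keep_ratio * c.size))
    thresh = np.sort(np.abs(c), axis=None)[-k]   # k-th largest magnitude
    return np.where(np.abs(c) >= thresh, c, 0.0)

C = np.array([[0.05, -3.2, 0.01], [1.7, -0.02, 0.4]])
print(hard_threshold(C, keep_ratio=0.5))   # keeps -3.2, 1.7, and 0.4
```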
When the source image undergoes the sparse transformation, the wavelet is used as the sparse basis. To reconstruct the image from fewer measurements, the sparse basis ψ and the measurement matrix ϕ must be incoherent. A random matrix has the advantage of being incoherent with almost any sparse basis, which is why it can be used as the measurement matrix.
The concrete realization of the image fusion algorithm based on the CS theory is as follows:

1. For each m × n pixel image, perform the directionlet transform to obtain the directionlet coefficient matrix.

2. Apply the sparse treatment to the directionlet coefficients and fuse them according to the maximum-absolute-value rule.

3. For the fused directionlet coefficients, select a random matrix as the measurement matrix ϕ; after measurement, the measured value y is obtained.

4. By solving the linear program for the l_{1} norm, the approximate solution $\widehat{x}$ is acquired.

5. Apply the inverse directionlet transform to the recovered coefficients to obtain the fused image.
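The five steps above can be sketched end to end as follows. For brevity this assumed sketch uses a one-level 2D Haar transform in place of the directionlet transform, and steps 3 and 4 (random measurement and l_{1} recovery) are omitted, since with exact recovery they would leave the fused coefficients unchanged:

```python
import numpy as np

# One-level 2D Haar transform as a simple stand-in for the directionlet
# transform (the actual algorithm uses S-AWT; Haar keeps the sketch short).
def haar2(img):
    a = (img[0::2, :] + img[1::2, :]) / 2.0      # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0      # row differences
    rows = np.vstack([a, d])
    a2 = (rows[:, 0::2] + rows[:, 1::2]) / 2.0   # column averages
    d2 = (rows[:, 0::2] - rows[:, 1::2]) / 2.0   # column differences
    return np.hstack([a2, d2])

def ihaar2(coef):
    h = coef.shape[1] // 2
    a2, d2 = coef[:, :h], coef[:, h:]
    rows = np.empty(coef.shape)
    rows[:, 0::2] = a2 + d2                      # undo column step
    rows[:, 1::2] = a2 - d2
    v = coef.shape[0] // 2
    a, d = rows[:v, :], rows[v:, :]
    img = np.empty_like(rows)
    img[0::2, :] = a + d                         # undo row step
    img[1::2, :] = a - d
    return img

def fuse(img_a, img_b):
    # Steps 1-2: transform each image and fuse by maximum absolute value.
    ca, cb = haar2(img_a), haar2(img_b)
    cf = np.where(np.abs(ca) >= np.abs(cb), ca, cb)
    # Steps 3-4 (random measurement, l1 recovery) omitted for brevity.
    # Step 5: inverse transform of the fused coefficients.
    return ihaar2(cf)
```

A quick sanity check of the design: the transform pair is perfectly reconstructing, so fusing an image with itself returns the image unchanged.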
4. Experimental results and analysis
The experiments selected registered infrared and visible images and conducted fusion with different approaches. In Figure 2, images (a) and (b) are the infrared and visible images of an airfield, respectively; both contain much detail and texture information. Image (c) is the fusion result based on the Laplacian pyramid transform, image (d) is the fusion result based on the discrete wavelet transform, and image (e) is the fusion result based on CS. As can be seen from the figure, images (c) and (d) show differing degrees of blur. Compared with images (c) and (d), image (e) is visually clearer.
Table 1 gives an objective evaluation of the image quality in this set of experiments. As the table shows, the standard deviation and the average gradient of image (e) are the highest, indicating better contrast and sharpness, which is consistent with the subjective evaluation. Figure 3 shows the mutual information at different sampling rates for the infrared and visible image fusion. The mutual information value measures the similarity between the fused image and the source images. The sampling rate affects the fused image quality [14]: the higher the sampling rate, the better the fused image quality, but also the greater the computational cost of reconstruction and the larger the required storage space.
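A histogram-based estimate of the mutual information criterion used in Figures 3 and 5 can be sketched as follows (the bin count is an illustrative choice):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information (in bits) between two equally
    sized images, commonly used as a fusion-quality criterion."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
a = rng.standard_normal((64, 64))
# An image shares far more information with itself than with independent noise.
print(mutual_information(a, a), mutual_information(a, rng.standard_normal((64, 64))))
```

In the fusion setting, the criterion is typically the sum of the mutual information between the fused image and each source image.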
As can be seen from Figure 3, the mutual information values of the compressive sensing image fusion algorithm are the best among the three fusion methods.
Figure 4 shows the multifocus image fusion experiment. Image (a) is the near-focus image, and image (b) is the far-focus image. Image (c) is the fusion result based on the discrete wavelet transform; image (d) is the fusion result based on CS. As can be seen from the figure, image (c) shows some blur; compared with image (c), image (d) is visually clearer. Figure 5 shows the mutual information at different sampling rates for the multifocus image fusion. The mutual information values of the compressive sensing image fusion algorithm are again the best among the compared fusion methods.
5. Conclusions
This paper put forward a fusion algorithm based on compressed sensing. Compared with the traditional wavelet transform, the proposed CS-based image fusion algorithm preserves image feature information, enhances the representation of spatial detail in the fused image, and increases the information content of the fused image. The experiments show that the approach in this paper outperforms the wavelet transform, Laplacian pyramid decomposition, and other methods.
References
 1.
Zhou X, Liu RA, Chen J: Infrared and visible image fusion enhancement technology based on multi-scale directional analysis. IEEE Computer Society. Piscataway: IEEE; 2009.
 2.
Hall DL, Llinas J: An introduction to multisensor data fusion. Proceedings of the IEEE 1997, 85(1):6-23.
 3.
Toet A, van Ruyven LJ, Valeton JM: Merging thermal and visual images by a contrast pyramid. Optical Engineering 1989, 28(7):789-792.
 4.
Yonghong J: Fusion of Landsat TM and SAR images based on principal component analysis. Remote Sensing Technology and Application 1998, 13(1):46494654.
 5.
Lin YC, Liu QH: An image fusion algorithm based on directionlet transform. Nanotechnology and Precision Engineering 2010, 8(6):565-568.
 6.
Velisavljevic V, Beferull-Lozano B, Vetterli M: Directionlets: anisotropic multidirectional representation with separable filtering. IEEE Transactions on Image Processing 2006, 15(7):1916-1933.
 7.
Jin Wei F, Randi YM: Multifocus fusion using dual-tree contourlet and compressed sensing. Opto-Electronic Engineering 2011, 38(4):87-94.
 8.
Candes E, Wakin MB: An introduction to compressive sampling. IEEE Signal Processing Magazine 2008, 25(2):21-30.
 9.
Provost F, Lesage F: The application of compressed sensing for photoacoustic tomography. IEEE Trans. on Med. Imaging 2009, 28(4):585-594.
 10.
Velisavljevic V: Low-complexity iris coding and recognition based on directionlets. IEEE Transactions on Information Forensics and Security 2009, 4(3):410-417.
 11.
Velisavljevic V, Beferull-Lozano B, Vetterli M: Space-frequency quantization for image compression with directionlets. IEEE Transactions on Image Processing 2007, 16(7):1761-1773.
 12.
Velisavljevic V, Beferull-Lozano B, Vetterli M: Efficient image compression using directionlets. 2007 6th International Conference on Information, Communications & Signal Processing. Piscataway: IEEE; 2007:1-5.
 13.
Wan T, Canagarajah N, Achim A: Compressive image fusion. IEEE International Conference on Image Processing. Piscataway: IEEE; 2008:1308-1311.
 14.
Huang XS, Dai QF, Cao YQ: Compressive sensing image fusion algorithm based on wavelet sparse basis. Application Research of Computers 2012, 29(9):3581-3583.
Acknowledgements
The authors are grateful to the anonymous referees for the constructive comments.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Zhou, X., Wang, W. & Liu, R. Compressive sensing image fusion algorithm based on directionlets. J Wireless Com Network 2014, 19 (2014) doi:10.1186/1687-1499-2014-19
Keywords
 Image fusion
 Compressive sensing
 Directionlet transform