# Compressive sensing image fusion algorithm based on directionlets

Xin Zhou, Wei Wang and Rui-an Liu

**2014**:19

https://doi.org/10.1186/1687-1499-2014-19

© Zhou et al.; licensee Springer. 2014

**Received: **30 December 2013

**Accepted: **24 January 2014

**Published: **1 February 2014

## Abstract

This paper presents a new image fusion method based on compressed sensing (CS). The method decomposes two or more source images with the directionlet transform, obtains sparse matrices through sparse representation of the directionlet coefficients, and fuses the sparse matrices with a maximum-absolute-value coefficient scheme. Compressed samples are then obtained through random observation, and the fused image is recovered from these reduced samples by solving an optimization problem. Simulations show that the proposed algorithm has a simple structure, is easy to implement, and achieves better fusion performance than competing methods.


## 1. Introduction

The main goal of image fusion is to extract all the important features from all input images and integrate them to form a fused image which is more informative and suitable for human visual perception or computer processing.

There are a number of pixel-level image fusion methods, including the weighted average method [1, 2], the pyramid transform method [3], the principal component analysis (PCA) method [4], as well as methods based on the wavelet transform. The wavelet transform has become an important tool in image fusion because of its excellent time-frequency analysis properties [5]. However, wavelet bases are isotropic and limited in direction, and they fail to represent highly anisotropic edges and contours in images well. To overcome these drawbacks of the wavelet transform, Velisavljević et al. proposed the directionlet transform, an anisotropic transform built on integer lattices. Directionlets still use one-dimensional filter banks, but their basis functions are multi-directional and anisotropic; the transform retains separable filtering and a critically sampled structure and allows perfect reconstruction. Theoretically, it therefore has advantages over the ordinary wavelet transform and other second-generation wavelet transforms [6].

In recent years, inspired by the idea of 'sparse' approximation, a novel theory called compressed sensing (CS) has been developed [7–9]. The CS principle states that if a signal is compressible or sparse in some transform domain, it can be projected onto a low-dimensional space using a measurement matrix that is incoherent with the transform basis, while still enabling reconstruction with high probability from this small number of random linear measurements by solving an optimization problem. Combining directionlets with CS is therefore expected to provide a new approach to image fusion.

This article proposes a new scheme for image fusion. In our scheme, the directionlet transform first decomposes each source image into two components, i.e., dense and sparse components. The dense components are fused by a selection method according to the manifestations of defocus, while the sparse components are fused under the CS framework by fusing a few linear measurements and solving an *l*_{1}-norm minimization problem with the two-step iterative shrinkage reconstruction algorithm. The proposed fusion scheme is applied to infrared and visible image fusion experiments, and its performance is evaluated in terms of computational efficiency, visual quality, and quantitative criteria.

The test results show that the proposed method blends image edge information well, is subjectively more consistent with human visual characteristics, and is objectively superior to other image fusion methods.

## 2. Directionlet transform

The directionlet transform, proposed by Velisavljević et al., is a multi-directional anisotropic transform based on integer lattices [10–12]. Because it adopts multi-directional anisotropic basis functions, it represents images better than the ordinary wavelet transform. At the same time, it only uses one-dimensional filter banks, with separable filtering and a critically sampled structure, and allows perfect reconstruction; in terms of computational complexity, it therefore also has advantages over other second-generation wavelet transforms. The directionlet transform is thus a new multi-scale analysis tool.

When one-dimensional filter banks are used to build a multi-directional two-dimensional separable wavelet transform, filtering and down-sampling are carried out along digital lines with any two rational slopes $r_1 = b_1/a_1$ and $r_2 = b_2/a_2$. However, once critical sampling is enforced, the two families of digital lines interfere with each other; that is, along the slopes $r_1$ and $r_2$, the notion of a digital line alone cannot provide a systematic rule for down-sampling under repeated filtering.

The solution is to consider the directions of the slopes $r_1 = b_1/a_1$ and $r_2 = b_2/a_2$ in the grid space $z^2$, expressed in matrix form as

$$\mathbf{M}_{\Lambda} = \begin{pmatrix} d_1 \\ d_2 \end{pmatrix} = \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}.$$

The direction of the vector $d_1$ along the slope $r_1$ is called the transform direction, and the direction of the vector $d_2$ along the slope $r_2$ is called the queue direction. The skewed anisotropic wavelet transform applied on such a lattice performs $n_1$ and $n_2$ ($n_1 \neq n_2$) one-dimensional transforms per iteration step along the transform direction and the queue direction, respectively; it is denoted S-AWT($\mathbf{M}_{\Lambda}$, $n_1$, $n_2$). From $\mathbf{M}_{\Lambda}$, the integer lattice $\Lambda$ can be ascertained. According to coset theory, $z^2$ is partitioned into $|\det \mathbf{M}_{\Lambda}|$ cosets of the integer lattice $\Lambda$. Filtering and down-sampling are conducted within every coset, after which the retained pixels belong to a sublattice $\Lambda'$ of the integer lattice $\Lambda$ and to the matrix generated accordingly. In this way, a sparse representation of anisotropic objects along the directions of the image is obtained. The principle is shown in Figure 1 (the transform direction in the figure is 45°).
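As a concrete illustration of the coset partition just described, the following numpy sketch labels the points of a small grid in $z^2$ by the $|\det \mathbf{M}_{\Lambda}|$ cosets of the lattice $\Lambda$; the generator matrix here (a quincunx lattice with a 45° transform direction) is chosen purely for illustration:

```python
import numpy as np

# Illustrative generator matrix (quincunx lattice, 45° transform direction);
# its columns are the lattice generator vectors.
M = np.array([[1, -1],
              [1,  1]])
n_cosets = abs(round(np.linalg.det(M)))  # |det M| cosets partition z^2

def coset_id(p, M):
    """Label the coset of the lattice Λ = M·z² containing integer point p.

    Two points lie in the same coset iff their difference is a lattice
    point, i.e. M⁻¹(p − q) is an integer vector, so the fractional part
    of M⁻¹·p identifies the coset."""
    frac = np.mod(np.linalg.solve(M.astype(float), np.asarray(p, dtype=float)), 1.0)
    return tuple(np.round(frac, 6))

# Partition a small grid into cosets; filtering and down-sampling would
# then be carried out separately within each coset.
cosets = {}
for x in range(4):
    for y in range(4):
        cosets.setdefault(coset_id((x, y), M), []).append((x, y))

print(f"|det M| = {n_cosets}, cosets found: {len(cosets)}")
```

For this quincunx example the grid splits into exactly two interleaved checkerboard cosets, matching $|\det \mathbf{M}_{\Lambda}| = 2$.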

An image processed by the above directionlet transform has very sparse coefficients and carries more directional information, which better describes the edge contours of infrared images.

## 3. Compressive sensing and image fusion

Compressive sensing enables a sparse or compressible signal to be reconstructed from a small number of non-adaptive linear projections, thus significantly reducing the sampling and computation costs [13]. CS has many promising applications in signal acquisition, compression, and medical imaging. In this paper, we investigate its potential application in the image fusion.

A one-dimensional signal $x$ can be viewed as an $N \times 1$ column vector in the space $R^N$, with elements $x[n]$, $n = 1, 2, \ldots, N$. If the signal is $K$-sparse, it can be written as

$$x = \psi s \qquad (1)$$

where *ψ* is the *N* × *N* sparse basis matrix and *s* is the coefficient column vector of dimension *N* × 1.

If the representation of $x$ in the basis $\psi$ has only $K \ll N$ non-zero (or significant) coefficients, $\psi$ is called the sparse basis of the signal $x$. CS theory indicates that if the transform coefficients of a length-$N$ signal $x$ on an orthogonal basis $\psi$ are sparse (that is, only a small number of coefficients are non-zero), and these coefficients are projected onto a measurement basis $\phi$ that is incoherent with the sparse basis $\psi$, an $M \times 1$ measurement signal $y$ is obtained; in this way, compressed sampling of the signal $x$ is realized. This can be expressed as

$$y = \phi x = \phi \psi s = \Theta s \qquad (2)$$

where $\phi$ is the $M \times N$ measurement matrix and $\Theta = \phi\psi$ is the $M \times N$ projection matrix. The measurement $y$ is the projection of the sparse signal $s$ through the matrix $\Theta$. Only when the orthogonal basis $\psi$ is incoherent with the measurement matrix $\phi$, that is, when the projection matrix satisfies the restricted isometry property (RIP), can the signal $x$ be accurately recovered from these measurements by solving the optimization problem

$$\min \|s\|_1 \quad \text{subject to} \quad y = \Theta s \qquad (3)$$

The block diagram derived from the CS theory for the field of image processing is shown in Figure 1.
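The measurement and recovery steps of formulas (2) and (3) can be sketched numerically. In the toy example below, all dimensions and the random seed are arbitrary choices, $\psi$ is taken as the identity so that $x = s$, and greedy orthogonal matching pursuit stands in for a full *l*_{1} solver; a $K$-sparse signal is then recovered from $M \ll N$ random Gaussian measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 128, 50, 4   # signal length, measurements, sparsity (illustrative)

# K-sparse signal s (psi = identity here, so x = s).
s = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
s[support] = rng.uniform(1.0, 2.0, K) * rng.choice([-1.0, 1.0], K)

Phi = rng.normal(size=(M, N)) / np.sqrt(M)  # random Gaussian measurement matrix
y = Phi @ s                                 # y = Phi x = Theta s, formula (2)

def omp(Phi, y, K):
    """Orthogonal matching pursuit: a greedy stand-in for the
    l1-minimisation recovery of formula (3)."""
    residual, idx = y.copy(), []
    for _ in range(K):
        # Pick the column most correlated with the current residual.
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the selected columns, then update residual.
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    s_hat = np.zeros(Phi.shape[1])
    s_hat[idx] = coef
    return s_hat

s_hat = omp(Phi, y, K)
print("relative error:", np.linalg.norm(s_hat - s) / np.linalg.norm(s))
```

With this generous measurement budget the greedy solver finds the true support and the recovery is essentially exact, illustrating the CS claim that far fewer than $N$ samples suffice for a sparse signal.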

The advantage of CS theory is that the amount of data obtained via projection measurements is much smaller than with conventional sampling methods, breaking the bottleneck of the Shannon sampling theorem, so that high-resolution signal acquisition becomes possible. CS theory is attractive because it has important implications and practical significance for many fields of science and engineering, such as statistics, information theory, coding theory, and theoretical computer science.

Compared with traditional fusion algorithms, CS-based image fusion shows significant advantages: fusion can be conducted without full sampling of the images, fusion quality can be improved by increasing the number of measurements, and the algorithm saves storage space and reduces computational complexity. The main idea of the CS-based image fusion algorithm is as follows: first, the two images to be processed undergo the directionlet transform; sparse matrices are obtained after the directionlet coefficients receive a sparse treatment; fusion rules for integrating the sparse matrices are then applied; compressive samples are acquired through a random measurement matrix; and finally, the fused image is obtained by solving the optimization problem.

The practical function of the wavelet transform is signal decorrelation: all the information of the signal is concentrated into the wavelet coefficients with large amplitude. These large wavelet coefficients contain far more energy than the small ones, so in reconstructing the signal, a large coefficient is more important than a smaller one.

$$D_f(x, y) = D_M(x, y), \qquad \left| D_M(x, y) \right| = \max_{i = 1, \ldots, I} \left| D_i(x, y) \right|$$

where *D*_{f} is the fused wavelet coefficient, *D*_{M} is the wavelet coefficient whose absolute value is the largest among the wavelet coefficients at the same location in the different images, and *I* is the number of source images.

The directionlet transform is applied to the source images, and the resulting directionlet coefficients are given a sparse treatment: small coefficients (those close to zero) are set to zero to obtain an approximately sparse coefficient matrix.

When the source image undergoes the sparsifying transformation, the wavelet is used as the sparse basis. To reconstruct the image from fewer measurement values, the sparse basis *ψ* and the measurement matrix *ϕ* must be incoherent; a random matrix has the advantage of being incoherent with almost any sparse basis, which is why it can serve as the measurement matrix. The steps of the proposed algorithm are as follows.

1. For each *m* × *n* pixel image, conduct the directionlet transform to obtain the directionlet coefficient matrix.
2. Process the directionlet coefficients with the sparse treatment, and fuse them according to the larger-absolute-value rule.
3. For the fused directionlet coefficients, select a random matrix as the measurement matrix *ϕ*; after measurement, the measured value *y* is obtained.
4. By solving the linear program of the *l*_{1} norm, acquire the approximate solution $\widehat{x}$.
5. Conduct the inverse transform on the obtained directionlet coefficients to acquire the fused image.
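The five steps above can be sketched end to end. In the sketch below, a one-level 2D Haar transform serves as a simple stand-in for the directionlet transform (an assumption for illustration only, since the true transform is direction-adaptive), and steps 4 to 5 are represented by the exact inverse transform rather than an actual *l*_{1} solver:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar2d(img):
    """One-level orthonormal 2D Haar transform - a simple stand-in for
    the directionlet transform of step 1 (illustrative only)."""
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)
    rows = np.vstack([a, d])
    a = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)
    d = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)
    return np.hstack([a, d])

def ihaar2d(C):
    """Exact inverse of haar2d (undo the column step, then the row step)."""
    h = C.shape[1] // 2
    a, d = C[:, :h], C[:, h:]
    rows = np.empty_like(C)
    rows[:, 0::2] = (a + d) / np.sqrt(2)
    rows[:, 1::2] = (a - d) / np.sqrt(2)
    v = C.shape[0] // 2
    a, d = rows[:v, :], rows[v:, :]
    out = np.empty_like(C)
    out[0::2, :] = (a + d) / np.sqrt(2)
    out[1::2, :] = (a - d) / np.sqrt(2)
    return out

# Step 1: transform both source images (random stand-ins here).
img1, img2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
C1, C2 = haar2d(img1), haar2d(img2)

# Step 2: sparsify (zero the small coefficients), then fuse by max-|.|.
for C in (C1, C2):
    C[np.abs(C) < 0.1] = 0.0
Cf = np.where(np.abs(C1) >= np.abs(C2), C1, C2)

# Step 3: random Gaussian measurement of the fused coefficients.
N = Cf.size
Mrows = N // 2
Phi = rng.normal(size=(Mrows, N)) / np.sqrt(Mrows)
y = Phi @ Cf.ravel()

# Steps 4-5 would recover the coefficients by l1 minimisation and invert
# the transform; with the full coefficient vector known here, we simply
# verify that the inverse transform is exact.
recon = ihaar2d(Cf)
print("measurements:", y.shape, "fused image:", recon.shape)
```

The structure mirrors the listed algorithm; swapping in a real directionlet transform and an *l*_{1} solver would complete the method described in the paper.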

## 4. Experimental results and analysis

**Comparison of statistical parameters about fusion results according to different fusion rules**

| Fusion method | Entropy | Cross entropy | Standard deviation | Average gradient |
|---|---|---|---|---|
| LP | 12.551 | 0.708 | 14.112 | 28.410 |
| DWT | 12.689 | 0.917 | 14.978 | 28.341 |
| CS | 12.974 | 0.961 | 15.201 | 29.134 |

As can be seen from Figure 3, the mutual information values of the compressive sensing image fusion algorithm are the best among the three fusion methods.
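Two of the quantitative criteria in the table, entropy and average gradient, can be computed as follows; this is a minimal sketch assuming 8-bit grey-level images and the standard definitions of the two measures:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram - higher
    values indicate a more informative fused image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0·log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Mean magnitude of horizontal/vertical finite differences - a
    common sharpness measure for fused images."""
    gx = np.diff(img.astype(float), axis=1)[:-1, :]
    gy = np.diff(img.astype(float), axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2)))

flat = np.full((8, 8), 128.0)          # constant image: no information,
print(entropy(flat), average_gradient(flat))  # no edges -> both are 0
```

A constant image scores zero on both measures, while sharper, more detailed fused images score higher, which is why larger values in the table indicate better fusion.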

## 5. Conclusions

This paper put forward a fusion algorithm based on compressed sensing. Compared with the traditional wavelet transform, the proposed CS-based image fusion algorithm preserves image feature information, enhances the spatial detail representation of the fused image, and increases the information content of the fused image. The experiments show that the approach in this paper outperforms methods based on the wavelet transform and Laplacian pyramid decomposition.

## Declarations

### Acknowledgements

The authors are grateful to the anonymous referees for the constructive comments.


## References

1. Zhou X, Liu R-A, Chen J: *Infrared and visible image fusion enhancement technology based on multi-scale directional analysis*. Piscataway: IEEE Computer Society; 2009.
2. Hall DL, Linas J: An introduction to multisensor data fusion. *Proceedings of the IEEE* 1997, 85(10):6-23.
3. Toet A, van Ruyven LJ, Velaton JM: Merging thermal and visual images by a contrast pyramid. *Optical Engineering* 1989, 28(7):789-792.
4. Yonghong J: Fusion of landsat TM and SAR images based on principal component analysis. *Remote Sensing Technology and Application* 1998, 13(1):4649-4654.
5. Lin YC, Liu QH: An image fusion algorithm based on directionlet transform. *Nanotechnology and Precision Engineering* 2010, 8(6):565-568.
6. Velisavljevic V, Beferull-Lozano B, Vetterli M: Directionlets: anisotropic multi-directional representation with separable filtering. *IEEE Transactions on Image Processing* 2006, 15(7):1916-1933.
7. Jin Wei F, Ran-di YM: Multi-focus fusion using dual-tree contourlet and compressed sensing. *Opto-Electronic Engineering* 2011, 38(4):87-94.
8. Candes E, Wakin MB: An introduction to compressive sampling. *IEEE Signal Processing Magazine* 2008, 48(4):21-30.
9. Provost F, Lesage F: The application of compressed sensing for photo-acoustic tomography. *IEEE Transactions on Medical Imaging* 2009, 28(4):585-594.
10. Velisavljevic V: Low-complexity iris coding and recognition based on directionlets. *IEEE Transactions on Information Forensics and Security* 2009, 4(3):410-417.
11. Velisavljevic V, Beferull-Lozano B, Vetterli M: Space-frequency quantization for image compression with directionlets. *IEEE Transactions on Image Processing* 2007, 16(7):1761-1773.
12. Velisavljevic V, Beferull-Lozano B, Vetterli M: Efficient image compression using directionlets. In *2007 6th International Conference on Information, Communications & Signal Processing*. Piscataway: IEEE; 2007:1-5.
13. Wan T, Canagarajah N, Achim A: Compressive image fusion. In *IEEE International Conference on Image Processing*. Piscataway: IEEE; 2008:1308-1311.
14. Huang XS, Dai QF, Cao YQ: Compressive sensing image fusion algorithm based on wavelet sparse basis. *Application Research of Computers* 2012, 29(9):3581-3583.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.