 Research
 Open access
Compressive sensing image-fusion algorithm in wireless sensor networks based on blended basis functions
EURASIP Journal on Wireless Communications and Networking, volume 2014, Article number: 150 (2014)
Abstract
Compressive sensing (CS) offers a new approach to data acquisition and signal processing and has led to novel solutions in many practical applications. Focusing on the pixel-level multi-source image-fusion problem in wireless sensor networks, this paper proposes a CS image-fusion algorithm based on multiresolution analysis. We present a method that decomposes the images by the nonsubsampled contourlet transform (NSCT) basis function and the wavelet basis function successively and fuses the images in the compressive domain; that is, the images are sparsely represented by more than one basis function. We call this process blended basis function representation. Since the NSCT and wavelet basis functions have complementary advantages in multiresolution image analysis, and the signals are sparser after being decomposed by the two kinds of basis functions, the proposed algorithm has clear advantages over the CS image fusion in the wavelet domain that is widely reported in the literature. Simulations show that our method provides promising results.
1 Introduction
Wireless sensor networking is a technology that promises an unprecedented ability to monitor and manipulate the physical world via a network of densely distributed wireless sensor nodes [1–3]. The nodes can sense the physical environment in a variety of modalities, including image, radar, acoustic, video, seismic, thermal, and infrared [4]. In wireless sensor networks, fusing information from multiple sensors is very challenging [5]. Information fusion in radar sensor networks has been studied extensively in [6–8]. In this paper, we focus on image fusion in wireless sensor networks.
Image fusion is an important problem in digital image processing. Because of their heavy computational load, traditional image-fusion algorithms often fail to meet the practical demands of real-time, low bit-rate transmission in wireless sensor networks. In recent years, compressive sensing has attracted significant interest because of its compressive capability, and it offers a way to balance the quality of the fused images against the computational complexity.
We focus on the pixel-level fusion of infrared and visible images of the same scene. The work in [9] shows that an image fused after decomposition by two multiresolution basis functions in succession has better quality than an image fused in a single multiresolution domain. As the wavelet function and other multiresolution tools are often used as sparse bases in compressive sensing (CS), this inspires us to apply the idea of blending two multiresolution functions to CS image fusion.
In this paper, we first give a brief description of CS and image fusion and propose a typical model of CS image fusion. We then introduce two multiresolution analysis tools, the nonsubsampled contourlet transform (NSCT) and the wavelet transform, since both perform well in image fusion and their advantages are complementary. In Section 4, we explore the idea of applying blended basis functions in the CS domain: blended basis functions and the wavelet basis alone are each employed to sparsely represent the same image, which is then reconstructed via the orthogonal matching pursuit (OMP) algorithm. The comparison shows that blended basis functions give promising results in CS. In Section 5, an image-fusion algorithm using blended basis functions is presented in the CS domain. Experiments show that the proposed algorithm achieves better fusion and reconstruction results than wavelet-based CS image fusion. Finally, conclusions and suggestions for future work are given in Section 6.
2 CS and image fusion
2.1 Brief description of CS
In 2006, D. L. Donoho demonstrated that many natural signals which are sparse or compressible can be accurately represented by a small set of low-dimensional projections that preserve the structure of the signal; the signal can then be reconstructed from these projections by an optimization process [10]. This theory is now known as compressive sensing.
Natural signals are usually not sparse in the time domain. But when we transform them into an appropriate basis (a wavelet basis, for example), most of the coefficients turn out to be zero or close to zero; that is, the signals are sparse in some domain. Consider a real-valued, finite-length, one-dimensional signal [11] f ∈ R^N; it can be represented as a linear combination of a set of orthonormal basis vectors:

f = ψθ     (1)

where ψ is some basis and θ is a vector containing only K ≪ N nonzero coefficients. We say that f is K-sparse in the domain ψ, and ψ is a sparse basis for the signal f. If the signal is sparse in some domain, it is compressible and can be well approximated by a K-sparse representation.
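A minimal numerical illustration of Equation 1 (a sketch, not from the paper; a random orthonormal basis stands in for the wavelet basis, and the sizes N and K are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 5

# A random orthonormal basis (columns of Psi), via QR decomposition;
# a real wavelet basis would play this role in the paper.
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))

# A K-sparse coefficient vector theta: only K of N entries are nonzero.
theta = np.zeros(N)
theta[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

f = Psi @ theta          # f looks dense in the time domain ...
theta_rec = Psi.T @ f    # ... but is exactly K-sparse in the Psi domain

print(np.count_nonzero(np.abs(theta_rec) > 1e-10))  # 5
```

Here orthonormality makes the analysis step a simple transpose; the point is only that the same signal is dense in one domain and K-sparse in another.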
If a signal is compressible, compressive measurements can be taken on it:

y = ϕf = ϕψθ     (2)

where y ∈ R^M and ϕ is an M × N matrix (M < N); in CS, ϕ is called the measurement matrix. Recovering the signal f from the measurements y seems to be an ill-posed problem, but CS theory shows that the signal can be reconstructed through an optimization algorithm. CS thus captures and represents compressible signals at a rate far below what conventional sampling requires.
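The measurement step of Equation 2 can be sketched the same way (an illustration with a random Gaussian ϕ; the dimensions are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 256, 64                     # M < N: far fewer measurements than samples

f = rng.standard_normal(N)                       # the signal (stand-in)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix

y = Phi @ f                        # M compressive measurements of f
print(y.shape)  # (64,)
```

The 256-sample signal is captured by only 64 linear measurements; reconstruction then relies on the sparsity of f in some basis.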
2.2 Image fusion in CS domain
With the development of CS theory, CS has become a viable candidate in many practical applications in recent years, and it is an attractive scheme for image fusion. Several papers have reported research on image fusion in the CS domain [12–14]; their core idea is summarized in Figure 1. As Figure 1 shows, the core idea of applying CS to image fusion is to fuse the measurements of the two input images in the CS domain; the composite measurements are then used to reconstruct the fused image by a nonlinear procedure. The wavelet transform, a widely used sparse transform and a traditional multiresolution analysis tool, is often used for sparse image decomposition, so the wavelet is the most common sparse basis in CS-based image fusion. However, the wavelet transform is not anisotropic in its representation of two-dimensional signals, so the edges of images fused by wavelet-based algorithms tend to be blurred. This motivates us to explore a new way to combine the advantages of different multiresolution analysis tools in the image-fusion process.
3 Introduction to multiresolution analysis tools
In pixel-level image fusion based on a transform domain, the commonly used multiresolution analysis tools are the wavelet transform, the pyramid transform, the contourlet transform, and so on. In this section, two multiresolution analysis methods, the wavelet transform and the NSCT, are selected for comparative analysis. As will be seen, the two basis functions have their own features and their advantages are complementary.
3.1 Wavelet transform
The wavelet transform is a widely used multiresolution analysis tool. It decomposes signals into different scales with different levels of resolution by dilating a prototype function; that is, it decomposes signals into shifted and scaled versions of a mother wavelet [15]. Any detail of a signal can be focused on adaptively by the wavelet transform, so it is called a 'digital microscope'. It also performs well in two-dimensional signal processing tasks such as image denoising, enhancement, and fusion. However, since the 2D wavelet transform captures only a limited number of directions, it cannot optimally represent high-dimensional signals with line singularities. Line singularities are typical of the edges in natural images, so the wavelet transform is deficient in processing edge signals.
3.2 Nonsubsampled contourlet transform
The nonsubsampled contourlet transform is built on the contourlet transform. It not only has the frequency characteristics of multiresolution analysis but is also anisotropic, so it captures the geometry of images well. The basic idea of the NSCT is to use a nonsubsampled pyramid decomposition to decompose the image into multiple scales; then, through a nonsubsampled directional filter bank, the signals at each scale are decomposed into different directional subbands. The number of subbands at each scale can be any power of two. The NSCT has no downsampling in this two-step decomposition, so it is translation invariant [16]. Because the NSCT has directional characteristics, it has a clear advantage in processing image edges.
3.3 The idea of blended basis function
The analysis above of the characteristics of the wavelet transform and the NSCT shows that the two have complementary advantages. The work in [9] proposed a novel algorithm that combines two multiresolution analysis functions to fuse images, and the method provides better results than traditional multiresolution-based image fusion.
In this paper, we call the process of decomposing signals by two basis functions successively blended basis function representation. Since blended basis functions have given promising results in multiresolution-based image fusion, and the wavelet basis, a typical basis function, is also widely used in CS, we propose to explore CS image fusion based on blended basis functions.
4 Applying blended basis functions to CS
Blended basis functions are the combination of two multiresolution analysis tools: the NSCT and the wavelet, applied in cascade. The image is first decomposed into multiple scales by the NSCT; after this decomposition, the signals at each scale are already sparse to some extent, with the high-frequency parts sparser than the low-frequency parts. These signals are then sparsely represented by the wavelet basis. That is, the signals are sparsely represented twice by two kinds of basis functions successively, which effectively enhances their sparsity. CS theory tells us that a sparser signal needs fewer measurements for reconstruction, or equivalently that the reconstruction is better when the same small number of measurements is taken. To test the feasibility of applying blended basis functions to CS, an experiment compares the reconstruction results using blended basis functions as the sparse basis with those using a single wavelet basis. The comparison is performed on an image of size 256 × 256, with a random measurement matrix and the OMP reconstruction algorithm; the simulation results, obtained on the Matlab platform (MathWorks, Inc., Natick, MA, USA), are shown in Figure 2.
In Figure 2, the images in the left group are sparsely represented by the wavelet basis, while the images in the right group are sparsely represented by blended basis functions. The images in the same row are recovered from the same number of measurements. Comparing images in the same row, the reconstruction results on the right are clearly better than those on the left, especially at edges and in detailed regions. Comparing the images vertically, the reconstruction quality in both columns declines gradually as the sampling rate M_rate decreases, but there is a clear performance improvement from the blended basis functions when fewer measurements are used.
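The OMP reconstruction used in these experiments can be sketched with a minimal numpy implementation (an illustrative version, not the authors' Matlab code; the dimensions and the fixed test coefficients are assumptions):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Greedily pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
N, M, K = 100, 60, 3                  # signal length, measurements, sparsity
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = np.array([1.5, -2.0, 1.0])
x_hat = omp(A, A @ x_true, K)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

With a sparser coefficient vector (smaller K), fewer rows of A suffice for exact recovery, which is the mechanism the blended representation exploits.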
5 Applying blended basis functions to CS image fusion
5.1 The proposed fusion method
In the multiresolution analysis of image signals, the low-frequency components are not as sparse as the high-frequency components, so we propose to fuse the two kinds of components separately. First, the NSCT is employed to decompose the image into multiple scales. Then the high-frequency NSCT components are sparsely represented by the wavelet basis, while the low-frequency parts are fused directly in the NSCT domain. Since the high-frequency NSCT coefficients are already sparse, sparsely representing them again by the wavelet transform enhances their sparsity.
The algorithm steps are listed below:

1. Decompose the two input images by the NSCT and divide the coefficients into high-frequency and low-frequency parts according to their layers.

2. Fuse the low-frequency components of the two images directly in the NSCT domain according to the low-frequency fusion rule.

3. Sparsely represent the high-frequency components by the wavelet basis.

4. Take compressive measurements of the high-frequency coefficients at the sampling rate M_rate.

5. Fuse the measurements of the high-frequency components in the CS domain according to the high-frequency fusion rule.

6. Reconstruct the fused high-frequency components via the OMP algorithm and apply the inverse wavelet transform to them.

7. Obtain the fused image by the inverse NSCT.
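The data flow of the seven steps can be sketched as follows. NSCT and wavelet routines are not available in plain numpy, so this sketch substitutes a one-level Haar-style average/difference split for the NSCT stage and collapses the wavelet/CS round trip of steps 3 to 6 into a simple coefficient selection; it illustrates only the structure of the algorithm, not the actual transforms or fusion rules:

```python
import numpy as np

rng = np.random.default_rng(3)

def split(sig):
    # Stand-in for NSCT decomposition (step 1): a one-level Haar-style
    # average/difference split into low- and high-frequency parts.
    pairs = sig.reshape(-1, 2)
    low = pairs.mean(axis=1)
    high = pairs[:, 0] - low
    return low, high

def merge(low, high):
    # Stand-in for the inverse NSCT (step 7); exact inverse of split().
    return np.stack([low + high, low - high], axis=1).reshape(-1)

# Two registered source "images" (1D stand-ins for simplicity).
imgA, imgB = rng.standard_normal(64), rng.standard_normal(64)
lowA, highA = split(imgA)
lowB, highB = split(imgB)

# Step 2: fuse the low-frequency parts directly (a plain average stands in
# for the power-weighted rule of Section 5.2).
low_f = 0.5 * (lowA + lowB)

# Steps 3-6: the full algorithm would wavelet-represent the high-frequency
# parts, measure them, fuse the measurements in CS domain, and recover via
# OMP; a max-absolute coefficient choice stands in for that round trip.
high_f = np.where(np.abs(highA) >= np.abs(highB), highA, highB)

fused = merge(low_f, high_f)  # step 7
print(fused.shape)  # (64,)
```

The split/merge pair is exactly invertible, so the fused output lives in the same domain as the inputs, mirroring how the inverse NSCT closes the loop in the real algorithm.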
5.2 Experiments and results
The two input images used in this experiment are a pair of accurately registered infrared and visible images. To preserve as much of the thermal radiation information of the infrared image as possible, the low-frequency components are fused under a power-weighted rule. It can be described as:

f_i^F(x,y) = ω_1 f_i^I(x,y) + ω_2 f_i^V(x,y)     (3)

where f_i^F(x,y) is the fused coefficient of level i, f_i^I(x,y) and f_i^V(x,y) are the i-level coefficients of the infrared and visible images, ω_1 is the weight of the infrared image, and ω_2 is the weight of the visible image. ω_1 and ω_2 are calculated by Equation 4:

ω_1 = E_i^I(x,y) / (E_i^I(x,y) + E_i^V(x,y)),   ω_2 = E_i^V(x,y) / (E_i^I(x,y) + E_i^V(x,y))     (4)
where E_{ i }^{I}(x,y) and E_{ i }^{V}(x,y) are the power of ilevel coefficients of the infrared and visible images.
For the high-frequency components, the fusion is carried out in the CS domain. Since the high-frequency components mainly carry the detail and edge information, which is demonstrated well in visible images, the measurements of the high-frequency components are fused by an absolute-value-weighted rule:

f_{i,j}^F(x,y) = (|f_{i,j}^I(x,y)| f_{i,j}^I(x,y) + |f_{i,j}^V(x,y)| f_{i,j}^V(x,y)) / (|f_{i,j}^I(x,y)| + |f_{i,j}^V(x,y)|)     (5)

where f_{i,j}^F(x,y) is the fused coefficient of level i and direction j, and f_{i,j}^I(x,y) and f_{i,j}^V(x,y) are the i-level, j-direction coefficients of the infrared and visible images. The size of the source image is 256 × 256. The proposed algorithm and the wavelet-based CS image-fusion algorithm are both employed to test the fusion results. The fused images are shown in Figure 3.
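The two fusion rules can be sketched in numpy (a sketch of one plausible reading of "power-weighted" and "absolute-value-weighted"; the exact weight formulas and the small epsilon guard against division by zero are assumptions, since the paper's equations are rendered as images):

```python
import numpy as np

def fuse_low(cI, cV):
    # Power-weighted rule for low-frequency NSCT coefficients:
    # weights proportional to the local power of each source.
    eI, eV = cI ** 2, cV ** 2
    w1 = eI / (eI + eV + 1e-12)    # epsilon guard is an assumption
    return w1 * cI + (1.0 - w1) * cV

def fuse_high(yI, yV):
    # Absolute-value-weighted rule for high-frequency CS measurements.
    aI, aV = np.abs(yI), np.abs(yV)
    w1 = aI / (aI + aV + 1e-12)
    return w1 * yI + (1.0 - w1) * yV

cI = np.array([2.0, 0.0, 1.0])     # toy infrared coefficients
cV = np.array([0.0, 3.0, 1.0])     # toy visible coefficients
print(fuse_low(cI, cV))            # ≈ [2. 3. 1.]
```

Both rules favor whichever source has the stronger response at each position, which is why the fused image keeps the infrared hot spots in the low band and the visible edges in the high band.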
The subjective evaluation is clear: the proposed algorithm gives better fusion results, especially in the presentation of details and edges. As the sampling rate M_rate decreases, the quality of the fused images degrades, but the proposed algorithm gives better results at the same sampling rate.
The objective evaluation results in Table 1 confirm this conclusion. Q^{AB/F} is a quality metric for image fusion proposed by Xydeas and Petrovic; it requires no reference image and correlates well with subjective criteria [17]. The larger the value of Q^{AB/F}, the better the fusion result. The Q^{AB/F} values show that the proposed algorithm clearly improves the quality of the reconstructed fused image.
6 Conclusions
In this paper, we present a feasible image-fusion algorithm in the CS domain for use in wireless sensor networks. Blended basis functions, i.e., two kinds of basis functions used successively, are used to sparsely represent the images. Since the sparsity of the signals is enhanced and the advantages of the two multiresolution tools are complementary, the proposed algorithm gives promising results in the CS domain. The experiments show that, compared with the widely used wavelet-based CS image fusion, our algorithm performs better in the presentation of details and edges.
References
Liang Q, Wang L, Ren Q: Fault-tolerant and energy-efficient cross-layer design for wireless sensor networks. Int. J. Sens. Netw. 2007, 2(3):248-257.
Ren Q, Liang Q: Fuzzy logic-optimized secure media access control (FSMAC) protocol for wireless sensor networks. IEEE Computational Intelligence for Homeland Security and Personal Safety 2005, 37-43. doi:10.1109/CIHSPS.2005.1500608
Ren Q, Liang Q: Throughput and energy-efficiency-aware protocol for ultra-wideband communication in wireless sensor networks: a cross-layer approach. IEEE Trans. Mobile Comput. 2008, 7(6):805-816.
Shu H, Liang Q: Fuzzy optimization for distributed sensor deployment. IEEE Wireless Commun. Netw. Conf. 2005, 3:1903-1908.
Liang Q: Situation understanding based on heterogeneous sensor networks and human-inspired favor weak fuzzy logic system. IEEE Syst. J. 2011, 5(2):156-163.
Liang Q: Radar sensor wireless channel modeling in foliage environment: UWB versus narrowband. IEEE Sens. J. 2011, 11(6):1448-1457.
Liang Q: Automatic target recognition using waveform diversity in radar sensor networks. Pattern Recognit. Lett. 2008, 29(3):377-381. doi:10.1016/j.patrec.2007.10.016
Liang Q, Cheng X, Samn SW: NEW: network-enabled electronic warfare for target recognition. IEEE Trans. Aerosp. Electron. Syst. 2010, 46(2):558-568.
Yang B: Researches on Novel Methods for Pixel-Level Multisensor Image Fusion. Zhengzhou University of Light Industry, Hunan; 2005:31-33.
Donoho DL: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52(4):1289-1306.
Baraniuk RG: Compressive sensing. IEEE Signal Process. Mag. 2007, 24(4):118-124.
Wan T, Qin Z: An application of compressive sensing for image fusion. Int. J. Comput. Math. 2011, 88(18):3915-3930. doi:10.1080/00207160.2011.598229
Xiao Xiang Z, Zuan W, Bamler R: Compressive sensing for image fusion - with application to pan-sharpening. In IEEE International Geoscience and Remote Sensing Symposium (IGARSS); July 2011:2793-2796.
Li X, Qin SY: Efficient fusion for infrared and visible images based on compressive sensing principle. IET Image Process. 2011, 5(2):141-147. doi:10.1049/iet-ipr.2010.0084
Mallat SG: A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11(7):674-693. doi:10.1109/34.192463
Do MN, Vetterli M: The contourlet transform: an efficient directional multiresolution image representation. IEEE Trans. Image Process. 2005, 14(12):2091-2106.
Piella G, Heijmans H: A new quality metric for image fusion. IEEE Int. Conf. Image Process. 2003, 3:173-176.
Acknowledgments
This work was supported by the Tianjin Outstanding Young Teachers Program and the National Instrument Program (2013yq030915).
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Tong, Y., Zhao, M., Wei, Z. et al. Compressive sensing image-fusion algorithm in wireless sensor networks based on blended basis functions. J Wireless Com Network 2014, 150 (2014). https://doi.org/10.1186/1687-1499-2014-150