Compressive sensing image-fusion algorithm in wireless sensor networks based on blended basis functions
© Tong et al.; licensee Springer. 2014
Received: 8 July 2014
Accepted: 28 August 2014
Published: 15 September 2014
Compressive sensing (CS) offers a new approach to data acquisition and signal processing and has enabled novel solutions in many practical applications. Focusing on the pixel-level multi-source image-fusion problem in wireless sensor networks, this paper proposes a CS image-fusion algorithm based on multi-resolution analysis. We present a method that decomposes the images by the nonsubsampled contourlet transform (NSCT) basis function and the wavelet basis function successively and fuses the images in the compressive domain. In other words, the images are sparsely represented by more than one basis function; we name this process blended basis function representation. Since the NSCT and wavelet basis functions have complementary advantages in multi-resolution image analysis, and the signals are sparser after being decomposed by two kinds of basis functions, the proposed algorithm has clear advantages over the widely reported CS image fusion in the wavelet domain. Simulations show that our method provides promising results.
Keywords: blended basis functions; compressive sensing; NSCT; wavelet transform; image fusion
Wireless sensor networking is a technology that promises unprecedented ability to monitor and manipulate the physical world via a network of densely distributed wireless sensor nodes [1–3]. The nodes can sense the physical environment in a variety of modalities, including image, radar, acoustic, video, seismic, thermal, and infrared. In wireless sensor networks, fusing information sensed by multiple nodes is very challenging. Information fusion in radar sensor networks has been extensively studied in [6–8]. In this paper, we focus on image fusion in wireless sensor networks.
Image fusion is an important issue in digital image processing. Traditional image-fusion algorithms often fail to meet the practical demands of real-time, low bit-rate transmission in wireless sensor networks because of their huge computational load. In recent years, compressive sensing has attracted significant interest because of its compressive capability, and it offers a way to balance the quality of the fused images against the computational complexity.
We focus on the pixel-level fusion problem for infrared and visible images of the same scene. It has been reported in the literature that an image fused after successive decomposition by two multi-resolution basis functions shows better quality than one fused in a single multi-resolution domain. Since the wavelet function and other multi-resolution tools are often used as sparse bases in compressive sensing (CS), this inspires us to apply the idea of blending two multi-resolution functions to CS image fusion.
In this paper, we first give a brief description of CS and image fusion and propose a typical model of CS image fusion. We then introduce two multi-resolution analysis tools, the nonsubsampled contourlet transform (NSCT) and the wavelet transform, since they perform well in image fusion and their advantages are complementary. In Section 4, we explore the idea of applying blended basis functions in the CS domain. For this purpose, blended basis functions and the wavelet basis alone are employed to sparsely represent the same image, which is then reconstructed via the orthogonal matching pursuit (OMP) algorithm. The performance of the two methods shows that blended basis functions provide promising results in CS. In Section 5, an image-fusion algorithm in the CS domain using blended basis functions is presented. Experiments show that the proposed algorithm achieves better fusion and reconstruction results than wavelet-based CS image fusion. Finally, conclusions and suggestions for future work are given in Section 6.
2 CS and image fusion
2.1 Brief description of CS
In 2006, D. L. Donoho demonstrated that many natural signals which are sparse or compressible can be accurately represented by a set of low-dimensional projections that preserve the structure of the signal; the signal can then be reconstructed from these projections using an optimization process. This theory is now known as compressive sensing.
A signal f ∈ R^N that is sparse in some basis can be expressed as f = ψθ, where ψ is the basis and θ is a vector containing only K ≪ N nonzero coefficients; we then say that f is K-sparse in the domain ψ, and ψ is a sparse basis for the signal f. A signal that is sparse in some domain is compressible and can be well approximated by K-sparse representations.
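As a concrete illustration (our own example, not from the paper), the NumPy sketch below builds an orthonormal DCT basis as ψ and a coefficient vector θ with only K nonzero entries, so that f = ψθ is K-sparse in the DCT domain:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 4

# Orthonormal DCT-II basis: column k of psi is the k-th DCT atom,
# so psi.T @ psi = I.
n = np.arange(N)
psi = np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
psi[:, 0] *= np.sqrt(1.0 / N)
psi[:, 1:] *= np.sqrt(2.0 / N)

# theta has only K << N nonzero coefficients, so f = psi @ theta
# is K-sparse in the domain psi.
theta = np.zeros(N)
theta[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
f = psi @ theta

# Because psi is orthonormal, the analysis psi.T @ f recovers theta exactly.
assert np.allclose(psi.T @ f, theta)
```

Any orthonormal basis (wavelet, DCT, etc.) plays the same role; the DCT is used here only because it is compact to write down.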
The measurements are obtained as y = ϕf, where y ∈ R^M and ϕ is an M × N matrix (M < N); in CS, ϕ is called the measurement matrix. Recovering the signal f from the measurements y appears to be an ill-posed problem, but CS theory shows that the signal can be reconstructed through an optimization algorithm. CS thus provides a way to capture and represent compressible signals at a remarkably low rate.
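The OMP reconstruction used later in the paper can be sketched as follows. This is a minimal textbook implementation, not the authors' code; the Gaussian measurement matrix and the problem sizes are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 128, 64, 4

# A K-sparse coefficient vector with nonzero magnitudes bounded away
# from zero, so greedy recovery is well conditioned.
theta = np.zeros(N)
support = rng.choice(N, K, replace=False)
theta[support] = rng.uniform(1.0, 3.0, K) * rng.choice([-1.0, 1.0], K)

phi = rng.standard_normal((M, N)) / np.sqrt(M)  # Gaussian measurement matrix
y = phi @ theta                                  # M < N compressed measurements

def omp(phi, y, K):
    """Greedy OMP: pick the column most correlated with the residual,
    then re-fit all chosen columns by least squares."""
    residual, idx = y.copy(), []
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(phi[:, idx], y, rcond=None)
        residual = y - phi[:, idx] @ coef
    out = np.zeros(phi.shape[1])
    out[idx] = coef
    return out

theta_hat = omp(phi, y, K)
assert np.allclose(theta_hat, theta, atol=1e-6)
```

After each greedy selection, the least-squares re-fit makes the residual orthogonal to all chosen columns, which is what distinguishes OMP from plain matching pursuit.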
2.2 Image fusion in CS domain
3 Introduction to multi-resolution analysis tools
In the pixel-level image fusion based on transform domain, the commonly used multi-resolution analysis tools are wavelet transform, pyramid transform, contourlet transform, and so on. In this section, two multi-resolution analysis methods, wavelet transform and NSCT, are selected for comparative analysis. It can be seen that the two basis functions have their own features and their advantages are complementary.
3.1 Wavelet transform
The wavelet transform is a widely used multi-resolution analysis tool. It decomposes signals into different scales at different levels of resolution by dilating a prototype function, that is, into shifted and scaled versions of the mother wavelet. Because any detail of a signal can be focused on adaptively, the wavelet transform is often called a 'digital microscope'. It also performs well in two-dimensional signal processing tasks such as image denoising, enhancement, and fusion. However, since the 2-D wavelet transform has only a limited number of directions, it cannot optimally represent two-dimensional signals with line singularities. Line singularities are typical of edges in natural images, so the wavelet transform shows a deficiency in processing edge signals.
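The scale-splitting idea can be made concrete with a one-level 2-D Haar transform written directly in NumPy. This toy example is ours, not the paper's; practical systems would use richer wavelet families (e.g. via PyWavelets):

```python
import numpy as np

def haar2d(img):
    """One Haar level: split an even-sized image into the approximation
    (LL) and detail (LH, HL, HH) sub-bands at half resolution."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Invert one Haar level, reconstructing the original image exactly."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

img = np.arange(64, dtype=float).reshape(8, 8)
assert np.allclose(ihaar2d(*haar2d(img)), img)
```

Applying `haar2d` recursively to the LL band yields the usual multi-level decomposition; note the three detail bands capture only horizontal, vertical, and diagonal directions, which is the directional limitation discussed above.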
3.2 Nonsubsampled contourlet transform
The nonsubsampled contourlet transform builds on the contourlet transform. It not only has multi-resolution frequency characteristics but is also anisotropic, so it captures the geometry of images well. The basic idea of NSCT is to use nonsubsampled pyramid decomposition to decompose the image into multiple scales; then, through a nonsubsampled directional filter bank, the signal at each scale is decomposed into different directional sub-bands, where the number of sub-bands per scale can be any power of two. Because NSCT involves no down-sampling in this two-step decomposition, it is translation invariant. Since NSCT has directional characteristics, its advantage in image-edge processing is obvious.
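The nonsubsampled (no down-sampling) idea can be illustrated with a simple "à trous"-style pyramid; this sketch is our own, uses a crude 5-point smoothing filter as an assumption, and omits NSCT's directional filter bank entirely:

```python
import numpy as np

def atrous_pyramid(img, levels=2):
    """Filter without down-sampling: at level j the kernel is dilated
    by 2**j instead of shrinking the image, so every sub-band keeps
    the full image size (hence translation invariance)."""
    lowpass, details = img.astype(float), []
    for j in range(levels):
        k = 2 ** j  # dilate the kernel, not the image
        smoothed = (np.roll(lowpass, k, axis=0) + np.roll(lowpass, -k, axis=0)
                    + np.roll(lowpass, k, axis=1) + np.roll(lowpass, -k, axis=1)
                    + lowpass) / 5.0
        details.append(lowpass - smoothed)  # band-pass detail layer
        lowpass = smoothed
    return lowpass, details

img = np.random.default_rng(2).standard_normal((16, 16))
low, det = atrous_pyramid(img)

# All layers keep the original size, and summing every layer
# reconstructs the image exactly (the detail layers telescope).
assert low.shape == img.shape and all(d.shape == img.shape for d in det)
assert np.allclose(low + sum(det), img)
```

A full NSCT would additionally pass each detail layer through a nonsubsampled directional filter bank to obtain the directional sub-bands.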
3.3 The idea of blended basis function
From the above analysis of the characteristics of the wavelet transform and NSCT, we can see that the two methods have complementary advantages. A novel algorithm has been proposed in the literature that combines two multi-resolution analysis functions to fuse images, and it provides better results than traditional multi-resolution-based image fusion.
In this paper, we call the process of decomposing signals by two basis functions successively blended basis function representation. Considering that blended basis functions have given promising results in multi-resolution-based image fusion, and that the wavelet basis, a typical basis function, is also widely used in CS, we propose to explore CS image fusion based on blended basis functions.
4 Applying blended basis functions to CS
In Figure 2, the images in the left group are sparsely represented by the wavelet basis, while the images in the right group are sparsely represented by blended basis functions. The images in the same row are recovered from the same number of measurements. Comparing images within a row, the reconstruction results on the right are clearly much better than those on the left, especially at edges and in detailed regions. Comparing images vertically, we find that as the sampling rate Mrate decreases, the reconstruction quality in both columns degrades gradually, but blended basis functions give a clear performance improvement when fewer measurements are used.
5 Applying blended basis functions to CS image fusion
5.1 The proposed fusion method
In the multi-resolution analysis of image signals, the low-frequency components are not as sparse as the high-frequency components, so we propose to fuse the two kinds of components separately. First, NSCT is employed to decompose the image into multiple scales. The high-frequency NSCT components are then sparsely represented by the wavelet basis, while the low-frequency parts are fused directly in the NSCT domain. Since the high-frequency NSCT coefficients are already sparse, their sparsity is enhanced after being sparsely represented again by the wavelet transform.
1. Decompose the two input images by NSCT and divide the coefficients into high-frequency and low-frequency parts according to their layers.
2. Fuse the low-frequency components of the two images directly in the NSCT domain according to the low-frequency fusion rule.
3. Sparsely represent the high-frequency components by the wavelet basis.
4. Obtain the compressed measurements with the sampling rate Mrate.
5. Fuse the measurements of the high-frequency components in the CS domain according to the high-frequency fusion rule.
6. Reconstruct the fused high-frequency components via the OMP algorithm and apply the inverse wavelet transform to them.
7. Obtain the fused image by the inverse NSCT.
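The CS-domain fusion of the high-frequency measurements above can be sketched as follows. The coefficient vectors, the Gaussian measurement matrix, and the absolute-maximum fusion rule are all our assumptions for illustration; the paper's exact high-frequency fusion rule may differ:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 256, 96

# Stand-ins for sparse high-frequency coefficient vectors of the
# infrared and visible source images.
c_ir = np.zeros(N); c_ir[rng.choice(N, 5, replace=False)] = 3.0
c_vis = np.zeros(N); c_vis[rng.choice(N, 5, replace=False)] = -2.0

# Both sources are measured with the SAME matrix phi, so their
# measurements live in a common compressed domain and can be fused
# entry by entry.
phi = rng.standard_normal((M, N)) / np.sqrt(M)
y_ir, y_vis = phi @ c_ir, phi @ c_vis

# Absolute-maximum fusion rule applied directly to the measurements:
# keep, per entry, the measurement of larger magnitude.
y_fused = np.where(np.abs(y_ir) >= np.abs(y_vis), y_ir, y_vis)
assert y_fused.shape == (M,)
```

The fused high-frequency coefficients would then be recovered from `y_fused` by OMP (step 6) before the inverse wavelet and inverse NSCT transforms.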
5.2 Experiments and results
where E_i^I(x,y) and E_i^V(x,y) are the energies of the i-level coefficients of the infrared and visible images, respectively.
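An energy-weighted low-frequency fusion rule of this kind can be sketched as below. The convex-combination weighting is a common choice and an assumption on our part, since the paper's exact formula is not reproduced in this excerpt:

```python
import numpy as np

def fuse_lowpass(low_ir, low_vis, eps=1e-12):
    """Fuse two low-frequency bands pixel by pixel: each output pixel
    is a convex combination of the sources, weighted by their relative
    energies (assumed rule; the paper's formula may differ)."""
    e_ir, e_vis = low_ir ** 2, low_vis ** 2   # pointwise energies
    w = e_ir / (e_ir + e_vis + eps)           # weight for the infrared band
    return w * low_ir + (1.0 - w) * low_vis

low_ir = np.array([[4.0, 0.0], [1.0, 2.0]])
low_vis = np.array([[0.0, 3.0], [1.0, 2.0]])
fused = fuse_lowpass(low_ir, low_vis)

# Where one source dominates in energy, its value dominates the output.
assert abs(fused[0, 0] - 4.0) < 1e-6
assert abs(fused[0, 1] - 3.0) < 1e-6
```

When both bands carry equal energy, the rule reduces to a plain average, which preserves the shared background of the scene.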
The subjective evaluation clearly shows that the proposed algorithm gives better fusion results, especially in the rendering of details and edges. As the sampling rate Mrate decreases, the quality of the fused images degrades, but the proposed algorithm still gives better results at the same sampling rate.
[Table: Q^AB/F of the reconstructed images, comparing the wavelet basis with blended basis functions]
In this paper, we present a feasible image-fusion algorithm in the CS domain that can be used in wireless sensor networks. Blended basis functions, i.e., two kinds of basis functions applied successively, are used to sparsely represent the images. Since the sparsity of the signals is enhanced and the advantages of the two multi-resolution tools are complementary, the proposed algorithm shows promising results in the CS domain. Experiments show that, compared with the widely used wavelet-based CS image fusion, our algorithm performs better in the rendering of details and edges.
This work was supported by the Tianjin Outstanding Young Teachers Program and the National Instrument Program (2013yq030915).
- Liang Q, Wang L, Ren Q: Fault-tolerant and energy efficient cross-layer design for wireless sensor networks. Int. J. Sens. Netw. 2007, 2(3):248-257.
- Ren Q, Liang Q: Fuzzy logic-optimized secure media access control (FSMAC) protocol for wireless sensor networks. IEEE Computational Intelligence for Homeland Security and Personal Safety 2005, 37-43. doi:10.1109/CIHSPS.2005.1500608
- Ren Q, Liang Q: Throughput and energy-efficiency-aware protocol for ultrawideband communication in wireless sensor networks: a cross-layer approach. IEEE Trans. Mobile Comput. 2008, 7(6):805-816.
- Shu H, Liang Q: Fuzzy optimization for distributed sensor deployment. IEEE Wireless Commun. Netw. Conf. 2005, 3:1903-1908.
- Liang Q: Situation understanding based on heterogeneous sensor networks and human-inspired favor weak fuzzy logic system. IEEE Syst. J. 2011, 5(2):156-163.
- Liang Q: Radar sensor wireless channel modeling in foliage environment: UWB versus narrowband. IEEE Sens. J. 2011, 11(6):1448-1457.
- Liang Q: Automatic target recognition using waveform diversity in radar sensor networks. Pattern Recognit. Lett. 2008, 29(3):377-381. doi:10.1016/j.patrec.2007.10.016
- Liang Q, Cheng X, Samn SW: NEW: network-enabled electronic warfare for target recognition. IEEE Trans. Aerosp. Electron. Syst. 2010, 46(2):558-568.
- Yang B: Researches on Novel Methods for Pixel Level Multi-sensor Image Fusion. Zhengzhou University of Light Industry; 2005:31-33.
- Donoho DL: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52(4):1289-1306.
- Baraniuk RG: Compressive sensing. IEEE Signal Process. Mag. 2007, 24(4):118-124.
- Wan T, Qin Z: An application of compressive sensing for image fusion. Int. J. Comput. Math. 2011, 88(18):3915-3930. doi:10.1080/00207160.2011.598229
- Zhu XX, Wang Z, Bamler R: Compressive sensing for image fusion - with application to pan-sharpening. In IEEE International Geoscience and Remote Sensing Symposium (IGARSS); July 2011:2793-2796.
- Li X, Qin S-Y: Efficient fusion for infrared and visible images based on compressive sensing principle. IET Image Process. 2011, 5(2):141-147. doi:10.1049/iet-ipr.2010.0084
- Mallat SG: A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11(7):674-693. doi:10.1109/34.192463
- Do MN, Vetterli M: The contourlet transform: an efficient directional multiresolution image representation. IEEE Trans. Image Process. 2005, 14(12):2091-2106.
- Piella G, Heijmans H: A new quality metric for image fusion. IEEE International Conference on Image Processing 2003, 3:173-176.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.