 Research
 Open access
Infrared and visible image fusion based on nonlinear enhancement and NSST decomposition
EURASIP Journal on Wireless Communications and Networking volume 2020, Article number: 162 (2020)
Abstract
In multi-scale geometric analysis (MGA)-based fusion methods for infrared and visible images, adopting the same representation for the two types of images leaves the thermal radiation target in the fused image inconspicuous and hard to distinguish from the background. To solve this problem, a novel fusion algorithm based on nonlinear enhancement and non-subsampled shearlet transform (NSST) decomposition is proposed. Firstly, NSST is used to decompose the two source images into low- and high-frequency subbands. Then, the wavelet transform (WT) is used to decompose the high-frequency subbands into approximate subbands and directional detail subbands. The “average” fusion rule is applied to the approximate subbands, and the “max-absolute” fusion rule to the directional detail subbands. The inverse WT is used to reconstruct the high-frequency subbands. To highlight the thermal radiation target, we construct a nonlinear transform function to determine the fusion weight of the low-frequency subbands, whose parameters can be adjusted to meet different fusion requirements. Finally, the inverse NSST is used to reconstruct the fused image. The experimental results show that the proposed method can simultaneously enhance the thermal target in infrared images and preserve the texture details in visible images, and that it is competitive with or even superior to state-of-the-art fusion methods in terms of both visual and quantitative evaluations.
1 Introduction
Image fusion technology, which aims to combine images obtained from different sensors to create a single, information-rich fused image [1], has been widely used in medical imaging [2, 3], remote sensing [4,5,6], object recognition [7, 8], and detection [9]. Among the combinations of different types of images, infrared and visible image fusion has attracted increasing attention [10]. Infrared images record the thermal radiation of the scene, so the target in an infrared image is prominent and obvious. However, infrared images have little detail information, low contrast, and poor visual and imaging quality. In contrast, visible images provide abundant detail information, but the target can be inconspicuous and is easily obscured by smoke, bad weather conditions, and other factors. Fusing the two types of images therefore compensates for the insufficient imaging capabilities of infrared and visible sensors [11]. The fused image can possess clearer scene information as well as better target characteristics [12].
There are seven main classes of fusion methods: multi-scale geometric analysis (MGA)-based, sparse representation-based [13,14,15], neural network-based [16, 17], subspace-based [18], saliency-based [19], hybrid models [20], and other methods. Among them, MGA-based methods are the most popular. MGA-based methods assume that images can be represented by different coefficients at different scales. These methods decompose the source images into low- and high-frequency bands, combine the corresponding bands with specific fusion rules, and reconstruct the fused image with the inverse MGA transform [21]. The key to MGA-based methods is the MGA transform, which determines how much useful information can be extracted from the source images and integrated into the fused image. Popular transforms used for decomposition and reconstruction include the wavelet transform (WT) [22], wedgelet transform [23], curvelet transform [24, 25], contourlet transform [26], NSCT [27, 28], shearlet transform (ST) [29], non-subsampled shearlet transform (NSST) [30], and so on. Due to its shift-invariance, high sensitivity, strong directivity, fast operation, and multi-directional processing, NSST has been widely used in image fusion [31]. Many studies have shown that NSST is more consistent with human visual characteristics than other MGA transforms and can give fused images better visual effects [32]. However, it may be inappropriate for infrared and visible image fusion. In infrared images, the target information is significant and easy to detect and recognize, while in visible images the detailed information is mainly carried by gradients. Therefore, adopting the same representation for the two types of images leaves the thermal radiation target inconspicuous and hard to distinguish from the background.
In MGA-based fusion methods, it is difficult to preserve the thermal radiation information of infrared images and the appearance information of visible images simultaneously.
To overcome this problem, we propose a new fusion algorithm based on nonlinear enhancement and NSST decomposition for infrared and visible images. Firstly, NSST is used to decompose the two source images into low- and high-frequency subbands. Then, the high-frequency subbands are fused with a WT-based method. To highlight the target, we construct a nonlinear transform function to determine the fusion weight of the low-frequency subbands, whose parameters can be adjusted to meet different fusion requirements. Finally, the inverse NSST is used to reconstruct the fused image. The experiments demonstrate that the proposed method can not only enhance the thermal target in infrared images but also preserve the texture details in visible images. The presented method is competitive with or even superior to other methods in terms of both visual and quantitative evaluations.
The rest of this paper is organized as follows. The theoretical basis and implementation steps of NSST are reviewed in Section 2. The details of the proposed image fusion method are presented in Section 3. Experimental results and comparisons are presented in Section 4. The main conclusions are drawn in Section 5.
2 Related works
NSST is one of the most suitable multi-scale geometric analysis tools for fusion applications. NSST provides an elegant sparse image representation that preserves edges and much detail information, and it introduces no artifacts or noise when the inverse NSST is performed. In addition, the shearlet coefficients are well-localized in tight frames ranging over various locations and scales with anisotropic orientation. This enables a successful fusion process and produces higher image quality with clearer image details and edges [33].
2.1 Basic principle of NSST
The shearlet construction is based on non-subsampled pyramid filter banks, which provide the multi-scale decomposition, and on directional filtering generated by a shear matrix, which provides multi-directional localization. When the dimension n = 2, the affine system with composite dilations is \( {A}_{AB}\left(\psi \right) \) [10]:

\( {A}_{AB}\left(\psi \right)=\left\{{\psi}_{j,l,k}(x)={\left|\det A\right|}^{j/2}\psi \left({B}^l{A}^jx-k\right):j,l\in \mathbb{Z},k\in {\mathbb{Z}}^2\right\} \)  (1)

where ψ ∈ L^{2}(R^{2}), A and B are 2 × 2 invertible matrices, and |det B| = 1. If A_{AB}(ψ) forms a Parseval tight frame for L^{2}(R^{2}), the elements of the system are called composite wavelets. For any f ∈ L^{2}(R^{2}), there is

\( \sum_{j,l,k}{\left|\left\langle f,{\psi}_{j,l,k}\right\rangle \right|}^2={\left\Vert f\right\Vert}^2 \)  (2)
Here, the matrices A^{j} and B^{l} are associated with scale transformations and geometric transformations (such as rotation and shear), respectively.
With \( {A}_a=\left(\begin{array}{cc}a& 0\\ {}0& \sqrt{a}\end{array}\right) \) and \( {B}_s=\left(\begin{array}{cc}1& s\\ {}0& 1\end{array}\right) \), the system can be written as:

\( \left\{{\psi}_{a,s,t}(x)={a}^{-3/4}\psi \left({A}_a^{-1}{B}_s^{-1}\left(x-t\right)\right):a>0,s\in \mathbb{R},t\in {\mathbb{R}}^2\right\} \)  (3)

Equation (3) is a shearlet system, and ψ_{a,s,t}(x) is a shearlet.
Figure 1 shows the tiling of the frequency plane induced by the shearlets and the frequency supports of the shearlet elements. It can be seen from Fig. 1 that each element \( {\hat{\psi}}_{j,l,k} \) is supported on a pair of trapezoids of size about 2^{j} × 2^{2j}, oriented along a line of slope l2^{−j}.
2.2 Implementation steps
The NSST can be realized through two steps:
(1) Multi-scale decomposition. The non-subsampled pyramid (NSP) filter bank decomposes each source image into a set of high- and low-frequency sub-images to attain multi-resolution decomposition. Firstly, the source image is decomposed into low- and high-frequency coefficients with the NSP. Then, the NSP decomposition at each layer iterates on the low-frequency component obtained at the previous layer in order to capture the singular points. Without the down-sampling operation, the subband images have the same size as the source image. Finally, for j decomposition levels, we obtain one low-pass image and j band-pass images.
(2) Directional localization. The shearlet filter bank decomposes the high-frequency sub-images to attain multi-direction decomposition. Firstly, the filters are mapped from pseudo-polar coordinates to Cartesian coordinates. Then, the “Meyer” wavelet is used to construct the window function and generate the shearlet filters. Finally, each subband image is convolved with the “Meyer” window function to obtain the directional subband images.
The two-level decomposition structure is shown in Fig. 2. The NSP decomposes the source image f into a low-pass filtered image \( {f}_a^1 \) and a high-pass filtered image \( {f}_d^1 \). In each iteration, the NSP decomposes the low-pass filtered image from the previous layer until the specified number of decomposition layers is reached. Finally, one low-frequency image and a series of high-frequency images are obtained.
3 Proposed method
In this section, we introduce the process of the proposed method and discuss the setting of its parameters. The low- and high-frequency components obtained from the NSST decomposition represent different feature information: the low-frequency components carry the approximate features of the source image, and the high-frequency components carry the detailed features. The approximate parts of images provide the most visually significant information and contrast information, while the detailed parts provide contour and edge information. Therefore, we use different fusion rules for the low- and high-frequency components. According to the stage of the image data to be fused and the degree of information extraction in the fusion system, image fusion is divided into three levels: pixel level, feature level, and decision level. The proposed method operates at the pixel level. The fusion scheme is shown in Fig. 3, and its steps are as follows:

Step 1: Decompose the infrared and visible images with NSST into low and highfrequency coefficients.

Step 2: Fuse lowfrequency coefficients based on nonlinear enhancement algorithm.

Step 3: Fuse highfrequency coefficients based on WTbased method.

Step 4: Apply inverse NSST to obtain the fused image.
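The data flow of the four steps can be sketched end-to-end. The sketch below is not the paper's implementation: it substitutes a crude single-level 3 × 3 mean-filter split for the NSST decomposition, omits the inner WT stage, and uses a fixed low-frequency weight `c_ir` in place of the nonlinear function of Section 3.1; all function names are illustrative.

```python
def decompose(img):
    """Single-level stand-in for NSST: a 3x3 mean filter gives the
    low-frequency band; the residual is the high-frequency band."""
    h, w = len(img), len(img[0])
    low = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[max(0, min(h - 1, i + di))][max(0, min(w - 1, j + dj))]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            low[i][j] = sum(vals) / 9.0
    high = [[img[i][j] - low[i][j] for j in range(w)] for i in range(h)]
    return low, high

def fuse(ir, vis, c_ir=0.6):
    """Steps 1-4 on two equally sized grayscale images (lists of lists)."""
    low_ir, high_ir = decompose(ir)
    low_v, high_v = decompose(vis)
    h, w = len(ir), len(ir[0])
    # Step 2: low-frequency weighted average (the paper derives the
    # weight nonlinearly; a constant is used here for illustration).
    low_f = [[c_ir * low_ir[i][j] + (1 - c_ir) * low_v[i][j]
              for j in range(w)] for i in range(h)]
    # Step 3: high-frequency max-absolute rule (WT stage omitted).
    high_f = [[high_ir[i][j] if abs(high_ir[i][j]) >= abs(high_v[i][j])
               else high_v[i][j] for j in range(w)] for i in range(h)]
    # Step 4: the inverse of this toy decomposition is simple addition.
    return [[low_f[i][j] + high_f[i][j] for j in range(w)] for i in range(h)]
```

A sanity check on the structure: fusing an image with itself reconstructs that image exactly, since the weighted average and max-absolute rules then both return the original bands.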
3.1 Lowfrequency subband fusion
The low-frequency components reflect the contour information of the image and contain most of the energy of the original image [34]. The weighted average method is commonly used to fuse low-frequency subbands; however, an unreasonable fusion weight causes loss of source image information or poor image quality. To address these problems, we introduce a fusion strategy that constructs a nonlinear transform function to determine the fusion weight of the low-frequency subbands.
In infrared images, the target information is significant. Due to its large gray values, the target is easy to detect and recognize. In order to highlight the target in the fused image, we extract the coefficients of the low-frequency component of the infrared image to determine the low-frequency fusion weight.
Each coefficient of the low-frequency component is mapped to its absolute value:

\( R=\left|{LFC}_{IR}\right| \)  (4)

where LFC_{IR} represents the low-frequency subband of the decomposed infrared image and R represents the significant infrared characteristic distribution. R_{mean} denotes the average of LFC_{IR}. When R is larger than R_{mean}, the coefficient can be considered a bright point; when R is smaller than R_{mean}, it can be considered a dark point. The bright points are regarded as the target, and the dark points as background. In order to highlight the target, a nonlinear transform function is introduced to control the degree of enhancement. The nonlinear transform function is as follows:
where the parameter λ belongs to (0, ∞).
The lowfrequency information fusion weight can be expressed as:
where C_{IR} is the fusion weight of the infrared image, C_{VIS} is that of the visible image, and both belong to [0, 1].
As shown in Eqs. 5–7, the parameter λ directly affects the fusion weight of the infrared image. Therefore, we can adjust λ to control the proportion of the infrared features of the fused image. Particularly, the larger the value of C_{IR}, the more obvious the target is. To strengthen the thermal radiation target, the value of C_{IR} should be relatively large.
The final low-frequency subband fusion result can be obtained as follows:

\( {LFC}_F={C}_{IR}\cdot {LFC}_{IR}+{C}_{VIS}\cdot {LFC}_{VIS} \)  (8)

where LFC_F represents the low-frequency component of the fused image and LFC_{VIS} represents the low-frequency component of the decomposed visible image.
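Since the exact transform of Eq. (5) is not reproduced here, the sketch below uses a logistic (sigmoid) curve as a stand-in weight function: it matches the behaviour described in Section 3.3 (S-shaped in R, centred on R_mean, steeper for larger λ) but is an assumption, not the paper's formula. It also assumes C_VIS = 1 − C_IR, consistent with both weights lying in [0, 1].

```python
import math

def infrared_weight(lfc_ir, lam=10.0):
    """Per-coefficient infrared fusion weight C_IR in (0, 1).

    Stand-in for the paper's nonlinear transform: an S-shaped curve in
    R = |LFC_IR| (Eq. 4), centred on R_mean, with steepness set by lam.
    """
    r = [[abs(v) for v in row] for row in lfc_ir]
    n = sum(len(row) for row in r)
    r_mean = sum(v for row in r for v in row) / n
    scale = max(r_mean, 1e-12)  # normalise so lam is dimensionless
    return [[1.0 / (1.0 + math.exp(-lam * (v - r_mean) / scale))
             for v in row] for row in r]

def fuse_low(lfc_ir, lfc_vis, lam=10.0):
    """Eq. (8)-style combination, assuming C_VIS = 1 - C_IR."""
    c = infrared_weight(lfc_ir, lam)
    return [[c[i][j] * lfc_ir[i][j] + (1 - c[i][j]) * lfc_vis[i][j]
             for j in range(len(lfc_ir[0]))] for i in range(len(lfc_ir))]
```

Bright points (above R_mean) receive weights above 0.5 and so lean toward the infrared coefficients, while dark points lean toward the visible coefficients; increasing λ sharpens this split.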
3.2 Highfrequency subband fusion
High-frequency components reflect detailed information, such as the edges and contours of the source image. To obtain more detailed information, we use a WT-based method to fuse the high-frequency subbands of the infrared and visible images. Firstly, the WT is used to decompose the high-frequency subbands into approximate subbands (LFC_{IR} and LFC_{VIS}) and directional detail subbands (HFC_{IR} and HFC_{VIS}). Here, the Haar wavelet is selected as the WT basis, and the number of decomposition layers is set to 1. Then, the “average” fusion rule is applied to the approximate subbands:

\( {LFC}_F=\left({LFC}_{IR}+{LFC}_{VIS}\right)/2 \)  (9)
The “max-absolute” fusion rule is applied to the directional detail subbands:

\( {HFC}_F=\left\{\begin{array}{ll}{HFC}_{IR},& \left|{HFC}_{IR}\right|\ge \left|{HFC}_{VIS}\right|\\ {}{HFC}_{VIS},& \mathrm{otherwise}\end{array}\right. \)  (10)
where LFC_F and HFC_F represent the fused approximate and directional detail subbands of the high-frequency subband images.
Finally, the inverse WT is applied to LFC_F and HFC_F to obtain the high-frequency subbands of the fused image.
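A minimal sketch of this high-frequency fusion step, assuming a one-level 1-D Haar transform applied row-wise for brevity (the paper applies a 2-D Haar WT); the function name is illustrative:

```python
def haar_fuse(hf_ir, hf_vis):
    """Fuse two high-frequency subbands: one-level row-wise Haar WT,
    'average' rule on approximations, 'max-absolute' rule on details,
    then inverse Haar. Row length must be even."""
    s = 2 ** 0.5
    fused = []
    for row_a, row_b in zip(hf_ir, hf_vis):
        out = []
        for k in range(0, len(row_a), 2):
            # Forward Haar on each input pair.
            aa, da = (row_a[k] + row_a[k + 1]) / s, (row_a[k] - row_a[k + 1]) / s
            ab, db = (row_b[k] + row_b[k + 1]) / s, (row_b[k] - row_b[k + 1]) / s
            a = (aa + ab) / 2.0                    # Eq. (9): average rule
            d = da if abs(da) >= abs(db) else db   # Eq. (10): max-absolute rule
            out.extend([(a + d) / s, (a - d) / s])  # inverse Haar
        fused.append(out)
    return fused
```

Because the Haar transform here is orthonormal, fusing a subband with itself returns it unchanged, which is a quick way to check the forward/inverse pair.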
3.3 Analysis of parameter
In the nonlinear enhancement method, one main parameter influences the enhancement performance, namely λ. In this section, we plot the curve of the enhancement weight C_{IR} for different values of λ, as shown in Fig. 4. The intensity of a target pixel in the fused image is determined by the value of C_{IR}: the larger the value of C_{IR}, the more evident the target is.
As shown in Fig. 4, the curve of C_{IR} against the abscissa R (the gray level of the pixel) is S-shaped, which shows that the target pixels obtain larger enhancement than the background pixels. Moreover, the curve of C_{IR} becomes steeper as λ increases. Therefore, λ can conveniently be adjusted to obtain different fusion results.
Figure 5 shows the fused images for λ values of 5, 10, 30, 50, 100, and 200. As seen in Fig. 5, the pixel intensity distribution of the infrared image is strengthened as λ increases. However, when λ becomes too large, distortion appears in the fused image; λ should therefore be only moderately large. In this paper, λ is set to 10. The proposed algorithm is summarized in Table 1.
4 Experimental results and discussion
4.1 Experimental scheme
To evaluate the performance of the proposed algorithm, two groups of simulation experiments were carried out. First, we compare the proposed method with six MGA-based methods; then we compare it with five other advanced methods; finally, qualitative and quantitative analyses of the experimental results are given. The infrared and visible images to be fused are collected from the TNO Image Fusion Dataset. Our experiments are performed in MATLAB on a computer with a 2.6 GHz Intel Core CPU and 4 GB of memory.
4.2 Fusion quality evaluation
4.2.1 Subjective evaluation
Subjective evaluation methods assess the quality of the fused image according to the evaluator’s own experience and perception. To some extent, this is a simple, direct, fast, and convenient approach; however, its low efficiency and poor real-time performance limit its practical applications. Table 2 shows the commonly used subjective evaluation criteria.
4.2.2 Objective evaluation
According to the objects being compared, the objective evaluation indicators of image fusion quality can be divided into three categories: the characteristics of the fused image itself, the relationship between the fused image and a standard reference image, and the relationship between the fused image and the source images [10]. We use A, B, and F to denote the infrared, visible, and fused images, respectively, and R to denote the ideal reference image. The five objective evaluation parameters we use are as follows.

(1) Entropy (E)
E directly measures the richness of the image information: the larger the E value, the better the fusion effect. The calculation formula is shown in Eq. (11):

\( E=-\sum_{i=0}^{L-1}{p}_i{\log}_2{p}_i \)  (11)
where L is the total number of gray levels of the image, and p_{i} is the probability with the gray value i in the image.
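As a sketch, E can be computed directly from the grey-level histogram (assuming integer pixel values in [0, L−1]):

```python
import math

def entropy(img, levels=256):
    """Shannon entropy of the grey-level histogram, as in Eq. (11)."""
    n = sum(len(row) for row in img)
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    # Sum -p_i * log2(p_i) over the non-empty bins only.
    return -sum((c / n) * math.log2(c / n) for c in hist if c)
```

A two-level image split evenly between two grey values has entropy exactly 1 bit, and a constant image has entropy 0.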

(2) Average gradient (AG)
AG reflects the micro-detail contrast and texture variation in the image: the larger the AG value, the more gradient information the fused image contains. The calculation formula is shown in Eq. (12):

\( AG=\frac{1}{MN}\sum_x\sum_y\sqrt{\frac{\Delta {F}_x^2+\Delta {F}_y^2}{2}} \)  (12)
where ΔF_{x} is the difference in the x direction of the fused image F, and ΔF_{y} is the difference in the y direction.
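Since Eq. (12) is not reproduced in full here, the sketch below implements one common form of AG, averaging \( \sqrt{(\Delta F_x^2+\Delta F_y^2)/2} \) over the interior differences; exact normalisation conventions vary between papers.

```python
def average_gradient(img):
    """Average gradient (AG): mean of sqrt((dFx^2 + dFy^2) / 2) over
    forward differences, one common reading of Eq. (12)."""
    h, w = len(img), len(img[0])
    total = 0.0
    for i in range(h - 1):
        for j in range(w - 1):
            dx = img[i][j + 1] - img[i][j]  # horizontal difference
            dy = img[i + 1][j] - img[i][j]  # vertical difference
            total += ((dx * dx + dy * dy) / 2.0) ** 0.5
    return total / ((h - 1) * (w - 1))
```

A flat image yields AG = 0; a unit horizontal ramp yields sqrt(1/2) per pixel.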

(3) Standard deviation (SD)
SD reflects the distribution of pixel gray values and the contrast of the fused image. It is defined as follows:

\( SD=\sqrt{\frac{1}{MN}\sum_x\sum_y{\left(F\left(x,y\right)-\mu \right)}^2} \)

where μ is the mean gray value of the fused image.
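In code, SD is the root-mean-square deviation of the pixel values from the image mean:

```python
def standard_deviation(img):
    """SD: root-mean-square deviation of pixel values from the mean."""
    vals = [v for row in img for v in row]
    mu = sum(vals) / len(vals)
    return (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5
```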

(4) Spatial frequency (SF)
SF reflects the overall activity of the image in the spatial domain: the larger the SF, the better the fusion effect. SF is defined in Eq. (16):

\( SF=\sqrt{{RF}^2+{CF}^2} \)  (16)
where RF and CF are the row and column frequency of image respectively.
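Assuming the usual definitions of RF and CF as root-mean-square first differences along rows and columns respectively, SF can be sketched as:

```python
def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), Eq. (16), with RF/CF the RMS of the
    horizontal/vertical first differences."""
    h, w = len(img), len(img[0])
    n = h * w
    rf = (sum((img[i][j] - img[i][j - 1]) ** 2
              for i in range(h) for j in range(1, w)) / n) ** 0.5
    cf = (sum((img[i][j] - img[i - 1][j]) ** 2
              for i in range(1, h) for j in range(w)) / n) ** 0.5
    return (rf * rf + cf * cf) ** 0.5
```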

(5) Edge information retention (Q^{AB/F})
Q^{AB/F} measures the amount of edge information transferred from the source images to the fused image. Q^{AB/F} is defined as follows:

\( {Q}^{AB/F}=\frac{\sum_{i,j}\left({Q}^{AF}\left(i,j\right){w}^A\left(i,j\right)+{Q}^{BF}\left(i,j\right){w}^B\left(i,j\right)\right)}{\sum_{i,j}\left({w}^A\left(i,j\right)+{w}^B\left(i,j\right)\right)} \)
where w^{A} and w^{B} denote the weights of the contributions of the infrared and visible images to the fused image, and Q^{AF} and Q^{BF} are computed from the edge information. A large Q^{AB/F} means that considerable edge information is transferred to the fused image; for a perfect fusion result, Q^{AB/F} is 1.
4.3 Experiments and results
4.3.1 Comparison with MGAbased methods
In the first group of simulation tests, we use the presented method to fuse five typical infrared and visible image pairs from the TNO dataset, namely “Men in front of house,” “Bunker,” “Sandpath,” “Kaptein_1123,” and “barbed_wire_2”. In addition, six MGA-based methods are selected for comparison, including WT [23], TEMST [35], NSST with weighted average [36], NSST with WT [37], NSCT with WT [38], and CURV with WT [39].
The key to MGA-based fusion schemes is the choice of transform. WT- and CURV-based methods produce block artifacts, reduce image contrast, and cannot capture abundant directional information. The NSCT-based method captures the geometry of image edges well, but the number of directions at each level is fixed. In NSST-based methods, the number of directions can be set arbitrarily, so more detailed information can be obtained; however, more directions mean a longer running time. We also replaced the LP with NSST in TEMST as a comparative experiment.
In the proposed method, the pyramid filter for NSST is set to “maxflat,” the decomposition level of NSST is set to 3, and the numbers of directions are set to {4,4,4}. The high-frequency subbands are decomposed to 1 level by the WT (with the Haar basis). The results are shown in Fig. 6. The first two rows in Fig. 6 show the infrared and visible images; the remaining rows show the fused images of our method, TEMST, NSST with weighted average, WT, NSST with WT, NSCT with WT, and CURV with WT. The subjective and objective evaluation parameters introduced earlier are used to analyze the fusion results.
The above five assessment indicators (i.e., E, AG, SD, SF, and Q^{AB/F}) on the five typical infrared and visible images are shown in Fig. 7. The larger their values, the better the fusion effects are.
4.3.2 Comparison with the stateoftheart methods
In this part, seven typical infrared and visible image pairs from the TNO dataset (i.e., men in front of house, bunker, soldier_behind_smoke_1, Nato_camp_sequence, Kaptein_1123, lake, and barbed_wire_1) are chosen to evaluate the effectiveness of the proposed method. We compare the proposed method with five other advanced methods: the guided filtering-based weighted average technique (GF) [40], multi-resolution singular value decomposition (MSVD) [41], fourth-order partial differential equations (FPDE) [42], different resolutions via total variation (DRTV) [43], and visual attention saliency guided joint sparse representation (SGJSR) [44].
The fused images are shown in Fig. 8. The values of the five evaluations metrics on the seven infrared and visible images are shown in Fig. 9.
4.3.3 Results and discussion
As seen in Figs. 6, 7, 8, and 9, all 12 methods achieve effective fusion of infrared and visible images. In the other MGA-based methods, the fused image is dark and the target is not prominent, as can be clearly seen from the sky in the images “Men in front of house” and “Kaptein_1123” in Fig. 6, whereas the proposed method yields clearly identifiable target information. In terms of the objective evaluation parameters, our method generally scores higher than the other methods, as seen in Fig. 7. In short, the presented method is superior to the other MGA-based methods.
Compared with the five advanced methods, the presented method achieves the best visual quality, as shown in Fig. 8. In the objective evaluation parameters (i.e., E, AG, SD, SF, and Q^{AB/F}) in Fig. 9, however, the results fluctuate: our method does not always obtain the highest values, but it delivers more stable image quality. In all, our method is competitive with the five advanced fusion methods.
5 Conclusions
In this study, we propose a new fusion algorithm for infrared and visible images based on nonlinear enhancement and NSST decomposition. Experiments demonstrate that this algorithm not only retains the texture details of the visible image but also highlights the targets in the infrared image. Compared with other MGA-based and advanced algorithms, it is competitive or even superior in terms of qualitative and quantitative evaluation. The fusion performance is beneficial for target detection and tracking in complex environments.
Availability of data and materials
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
Abbreviations
MGA: Multi-scale geometric analysis
NSST: Non-subsampled shearlet transform
WT: Wavelet transform
CURV: Curvelet transform
NSCT: Non-subsampled contourlet transform
TEMST: Target-enhanced multiscale transform decomposition
GF: Guided filtering-based weighted average technique
MSVD: Multi-resolution singular value decomposition
FPDE: Fourth-order partial differential equations
DRTV: Different resolutions via total variation
SGJSR: Visual attention saliency guided joint sparse representation
References
W. Liu, Z. Wang, A novel multifocus image fusion method using multiscale shearing nonlocal guided averaging filter [J]. Signal Process. (2020). https://doi.org/10.1016/j.sigpro.2019.107252
S.M. Darwish, Multilevel fuzzy contourletbased image fusion for medical applications [J]. IET Image Process. 7(7), 694–700 (2013)
P.H. Venkatrao, S.S. Damodar, HWFusion: Holoentropy and SPWhale optimisationbased fusion model for magnetic resonance imaging multimodal image fusion [J]. IET Image Process. 12(4), 572–581 (2018)
X. Wei, Adaptive remote sensing image fusion under the framework of data assimilation [J]. Opt. Eng. 50(6), 067006 (2011)
G. Simone, A. Farina, F.C. Morabito, et al., Image fusion techniques for remote sensing applications [J]. Information Fusion 3(1), 3–15 (2002)
W. Li, X. Hu, J. Du, et al., Adaptive remotesensing image fusion based on dynamic gradient sparse and average gradient difference [J]. Int. J. Remote Sens. 38(23), 7316–7332 (2017)
R. Raghavendra, C. Busch, Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition [J]. Pattern Recogn. 47(6), 2205–2221 (2014)
R. Singh, M. Vatsa, A. Noore, Integrated multilevel image fusion and match score fusion of visible and infrared face images for robust face recognition [J]. Pattern Recogn. 41(3), 880–893 (2008)
J. Han, B. Bhanu, Fusion of color and infrared video for moving human detection [J]. Pattern Recogn. 40(6), 1771–1784 (2007)
Z. Zhou, M. Dong, X. Xie, et al., Fusion of infrared and visible images for nightvision context enhancement [J]. Appl. Opt. 55(23), 6480 (2016)
M. Ding, L. Wei, B. Wang, Research on fusion method for infrared and visible images via compressive sensing [J]. Infrared Phys. Technol. 57, 56–67 (2013)
S. Gao, W. Jin, L. Wang, Objective color harmony assessment for visible and infrared color fusion images of typical scenes [J]. Opt. Eng. 51(11), 117004 (2012)
Q. Zhang, Y. Fu, H. Li, et al., Dictionary learning method for joint sparse representationbased image fusion [J]. Opt. Eng. 52(5), 057006 (2013)
M. Wang, Z. Mi, J. Shang, et al., Image fusionbased video deraining using sparse representation [J]. Electron. Lett. 52(18), 1528–1529 (2016)
X. Fengtao, J. Zhang, L. Pan, et al., Robust image fusion with block sparse representation and online dictionary learning [J]. IET Image Process. 12(3), 345–353 (2018)
W. Kong, J. Liu, Technique for image fusion based on nonsubsampled shearlet transform and improved pulse-coupled neural network [J]. Opt. Eng. 52(1), 017001 (2013)
G. Wang, H. Tang, B. Xiao, et al., Pixel convolutional neural network for multifocus image fusion [J]. Information Sciences: An International Journal 433/434, 125–141 (2018)
S. Li, Z. Yao, W. Yi, Frame fundamental highresolution image fusion from inhomogeneous measurements [J]. IEEE Trans. Image Process. 21(9), 4002–4015 (2012)
D.P. Bavirisetti, R. Dhuli, Twoscale image fusion of visible and infrared images using saliency detection [J]. Infrared Phys. Technol. 76, 52–64 (2016)
L. Petrusca, P. Cattin, V. De Luca, et al., Hybrid ultrasound/magnetic resonance simultaneous acquisition and image fusion for motion monitoring in the upper abdomen [J]. Investig. Radiol. 48(5), 333–340 (2013)
W. Kong, Technique for grayscale visual light and infrared image fusion based on nonsubsampled shearlet transform [J]. Infrared Phys. Technol. 63, 110–118 (2014)
Z. Zhou, M. Tan, Infrared image and visible image fusion based on wavelet transform [J]. Adv. Mater. Res. 756759(2), 2850–2856 (2013)
D.L. Donoho, Wedgelets: nearly minimax estimation of edges [J]. Ann. Stat. 27(3), 859–897 (1999)
F.E. Ali, I.M. ElDokany, A.A. Saad, et al., A curvelet transform approach for the fusion of MR and CT images [J]. J. Mod. Opt. 57(4), 273–286 (2010)
L. Guo, M. Dai, M. Zhu, Multifocus color image fusion based on quaternion curvelet transform [J]. Opt. Express 20(17), 18846 (2012)
M.N. Do, M. Vetterli, The contourlet transform: an efficient directional multiresolution image representation [J]. IEEE Trans. Image Process. 14(12), 2091–2106 (2005)
G. Bhatnagar, Q. Wu, Z. Liu, Directive contrast based multimodal medical image fusion in NSCT domain [J]. IEEE Transactions on Multimedia. 15(5), 1014–1024 (2013)
Y. Li, Y. Sun, X. Huang, et al., An image fusion method based on sparse representation and sum modifiedlaplacian in NSCT domain [J]. Entropy 20(7), 522 (2018)
Z. Fan, D. Bi, S. Gao, et al., Adaptive enhancement for infrared image using shearlet frame [J]. J. Opt. 18(8), 085706 (2016)
P. Ganasala, V. Kumar, Multimodality medical image fusion based on new features in NSST domain [J]. Biomed. Eng. Lett. 4(4), 414–424 (2015)
W. Kong, B. Wang, Y. Lei, Technique for infrared and visible image fusion based on nonsubsampled shearlet transform and spiking cortical model [J]. Infrared Phys. Technol. 71, 87–98 (2015)
L. Xu, G. Gao, D. Feng, Multifocus image fusion based on nonsubsampled shearlet transform [J]. IET Image Process. 7(6), 633–639 (2013)
Q. Miao, C. Shi, P. Xu, et al., A novel algorithm of image fusion using shearlets [J]. Opt. Commun. 284(6), 1540–1547 (2011)
Y. Zhang, L. Zhang, X. Bai, et al., Infrared and visual image fusion through infrared feature extraction and visual information preservation [J]. Infrared Phys. Technol. 83, 227–237 (2017)
J. Chen, X. Li, L. Luo, G. Mei, J. Ma, Infrared and visible image fusion based on targetenhanced multiscale transform decomposition [J]. Inf. Sci. (2020). https://doi.org/10.1016/j.ins.2019.08.066
X. Liu, W. Mei, H. Du, Structure tensor and nonsubsampled shearlet transform based algorithm for CT and MRI image fusion [J]. Neurocomputing. 235, 131–139 (2017)
Z. Qu, Y. Xing, Y. Song, An image enhancement method based on nonsubsampled shearlet transform and directional information measurement [J]. Information 9(12), 308 (2018)
Y. Wu, H. Zhang, F. Zhang, et al., Fusion of visible and infrared images based on nonsampling contourlet and wavelet transform [J]. Appl. Mech. Mater. 3360(1200), 1523–1526 (2014)
G.G. Bhutada, R.S. Anand, S.C. Saxena, Edge preserved image enhancement using adaptive fusion of images denoised by wavelet and curvelet transform [J]. Digital Signal Processing 21(1), 118–130 (2011)
S. Li, X. Kang, J. Hu, Image fusion with guided filtering [J]. IEEE Trans. Image Process. 22(7), 2864–2875 (2013)
V.P.S. Naidu, Image fusion technique using multiresolution singular value decomposition [J]. Def. Sci. J. 61(5), 479–484 (2011)
D.P. Bavirisetti, G. Xiao, G. Liu, Multi-sensor image fusion based on fourth order partial differential equations, in 2017 20th International Conference on Information Fusion (Fusion), Xi’an, pp. 1–9 (2017). https://doi.org/10.23919/ICIF.2017.8009719
Q. Du, X. Han, et al., Fusing infrared and visible images of different resolutions via total variation model [J]. Sensors 18(11) (2018). https://doi.org/10.3390/s18113827
B. Yang, S. Li, Visual attention guided image fusion with sparse representation [J]. Optik  International Journal for Light and Electron Optics 125(17), 4881–4888 (2014)
Acknowledgements
The research is supported by the National Natural Science Foundation of China under NO. 61805021 and the Department of Science and Technology of Jilin Province under NO. JJKH20191196KJ.
Funding
This work is supported in part by the Natural Science Foundation of China under NO. 61805021 and in part by the Department of Science and Technology Plan Projects of Jilin Province under NO. JJKH20191196KJ.
Author information
Authors and Affiliations
Contributions
XX is the main writer of this paper. She proposed the main idea and constructed the nonlinear function. LC (the second author) and LC (the third author) completed the simulation experiments and the comparisons with other algorithms. XT gave important suggestions for the simulation. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Xing, X., Liu, C., Luo, C. et al. Infrared and visible image fusion based on nonlinear enhancement and NSST decomposition. J Wireless Com Network 2020, 162 (2020). https://doi.org/10.1186/s13638-020-01774-6