 Research
 Open Access
A novel local and nonlocal total variation combination method for image restoration in wireless sensor networks
 Mingzhu Shi^{1, 2} and
 Liang Feng^{3}
https://doi.org/10.1186/s13638-017-0951-y
© The Author(s). 2017
 Received: 13 July 2017
 Accepted: 27 September 2017
 Published: 11 October 2017
Abstract
In this paper, we propose a novel local and nonlocal total variation combination method for image restoration in wireless sensor networks (WSN), which plays an important role in improving the quality of the transmitted image. First, the degraded image is preprocessed by an image smoothing scheme that divides the image into two regions: one contains salient edges and flat regions and is regularized by the local TV term; the other is rich in image details and is regularized by the nonlocal TV term. Then, the alternating direction method of multipliers (ADMM) is adopted to optimize the complex objective function, and two key parameters are discussed for better performance. Finally, we compare our method with several recent state-of-the-art methods and illustrate the efficiency and performance of the proposed model by experimental results in peak signal-to-noise ratio (PSNR) and computing time.
Keywords
 Image restoration
 Nonlocal total variation
 Alternating direction method of multipliers
 Overlapping group sparsity
1 Introduction
1.1 Problem setup
Here, ∇ is the local gradient operator, and \( {\left\Vert \nabla f\right\Vert}_1=\sum \sqrt{{\left({\nabla}_{(1)}f\right)}^2+{\left({\nabla}_{(2)}f\right)}^2} \), where ∇_{(1)} f and ∇_{(2)} f represent the local first-order differences of f in the horizontal and vertical directions, respectively. The local TV model has been proven to perform well in preserving edges due to its linear penalty on differences between adjacent pixels. However, it yields staircase artifacts that smooth image details. Therefore, it is of great importance to model appropriate prior knowledge from natural images, or to impose a more appropriate prior assumption, to constrain the solution. The underlying motivation of this paper is thus to establish appropriate regularization terms and to improve the efficiency of the numerical algorithm for the resulting complex objective function.
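For concreteness, the isotropic local TV defined above can be evaluated in a few lines. The following is a minimal NumPy sketch (not the paper's code) using the periodic boundary condition adopted later in Section 2:

```python
import numpy as np

def local_tv(f):
    """Isotropic total variation ||∇f||_1 with periodic boundaries:
    sum over pixels of sqrt((∇_(1) f)^2 + (∇_(2) f)^2)."""
    d1 = np.roll(f, -1, axis=0) - f   # first-order difference, vertical
    d2 = np.roll(f, -1, axis=1) - f   # first-order difference, horizontal
    return np.sum(np.sqrt(d1 ** 2 + d2 ** 2))
```

A constant image has zero TV, while a single bright pixel contributes through every difference that touches it, which is exactly the linear penalty on adjacent-pixel differences discussed above.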
1.2 Related works
In recent years, the nonlocal TV has been successfully used in image processing tasks [6, 7]. It uses pixel information from the whole image rather than only adjacent pixels, and combines the variational framework with the nonlocal self-similarity constraint to restore image details; this is the main difference from the local TV model. However, if the nonlocal self-similarity constraint is the only constraint, similar image structures still cannot be estimated accurately. When the TV model and the nonlocal self-similarity constraint are both applied to the entire image, performance is compromised by the limitations of the TV model [8]. Besides, since the nonlocal total variation requires weighted differences between pixels over the whole image, it is more time consuming and needs more efficient algorithms. The split Bregman method has been proposed to solve the nonlocal TV image restoration problem, but its efficiency is unsatisfactory [9, 10]: it needs not only the outer iteration of the subproblem but also inner iterations for the nonlocal Laplacian operator. Zhu et al. propose an efficient primal-dual hybrid gradient algorithm, which alternates between the primal and dual formulations of total variation [11]. A unified primal-dual algorithm framework has been proposed to solve the local total variation problem with L _{1} basis pursuit and TV-L _{2} minimization [12]. Bonettini et al. establish the convergence of a general primal-dual method for nonsmooth convex optimization problems whose structure is typical in the imaging framework [13]. In these approaches, many parameters have to be chosen, which is time consuming. To overcome this drawback, the alternating direction method of multipliers (ADMM) has been widely used in recent image processing tasks [14, 15]. Its outstanding advantage is that its subproblems admit efficient solutions, with no inner iterations required.
Hence, the problem needs to be solved from two aspects: one is how to choose a good regularization functional φ(f), which is an active research area in image science, and the other is how to shorten the computation time without yielding staircase artifacts, which is also a challenging problem.
The rest of the paper is organized as follows. In Section 2, we introduce the definition of the nonlocal total variation and the principle of the overlapping group sparsity and the ADMM algorithm. They are the essential tools in our method. Section 3 introduces the object function of the proposed model and discusses the parameter selection criteria. In Section 4, we carry out experiments and compare ours with other stateoftheart methods. Finally, we make a conclusion in Section 5.
2 Preliminaries
2.1 Nonlocal total variation
D _{(1)}, D _{(2)} ∈ R ^{ mn × mn } are the gradient matrices in the vertical and horizontal directions, and we have Df _{ i } = [(D _{(1)} f)_{ i }, (D _{(2)} f)_{ i }]^{T} for each f ∈ V. By stacking the ith rows of D _{(1)} and D _{(2)} together, we get a two-row matrix Df _{ i } ∈ R ^{2 × mn }. Define the global first-order finite difference operator as D = [(D _{(1)})^{T}, (D _{(2)})^{T}]^{T} ∈ R ^{2mn × mn }. We consider Df ∈ Q and assume the images in this paper satisfy the periodic boundary condition. The discrete gradient operators are defined by \( {\left({D}_{(1)}f\right)}_{i,j}=\left\{\begin{array}{l}{f}_{i+1,j}-{f}_{i,j}\kern0.75em \mathrm{if}\kern0.5em i<m\\ {}{f}_{1,j}-{f}_{m,j}\kern1em \mathrm{if}\kern0.5em i=m\end{array}\right. \) and \( {\left({D}_{(2)}f\right)}_{i,j}=\left\{\begin{array}{l}{f}_{i,j+1}-{f}_{i,j}\kern0.75em \mathrm{if}\kern0.5em j<n\\ {}{f}_{i,1}-{f}_{i,n}\kern1em \mathrm{if}\kern0.5em j=n\end{array}\right. \).
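In practice, D _{(1)} and D _{(2)} are never formed as explicit mn × mn matrices; the periodic forward differences (and their adjoints, needed by any gradient-based solver) act directly on the image. A small sketch of this, matching the piecewise definition above:

```python
import numpy as np

def D1(f):
    # (D_(1) f)_{i,j} = f_{i+1,j} - f_{i,j} for i < m, and f_{1,j} - f_{m,j} for i = m
    return np.roll(f, -1, axis=0) - f

def D2(f):
    # (D_(2) f)_{i,j} = f_{i,j+1} - f_{i,j} for j < n, and f_{i,1} - f_{i,n} for j = n
    return np.roll(f, -1, axis=1) - f

def D1_adjoint(g):
    # Adjoint of the periodic forward difference (negative divergence direction)
    return np.roll(g, 1, axis=0) - g
```

The adjoint identity ⟨D _{(1)} f, g⟩ = ⟨f, D _{(1)}^{T} g⟩ holds exactly under periodic boundaries, which is what allows the later ADMM subproblems to be diagonalized by the FFT.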
The main purpose of the nonlocal regularization is to generalize the local gradient and divergence concepts into a nonlocal form. Generally, a reference image as close as possible to the original image is desired in order to obtain the weights more exactly. However, it is hard to compute precise weights between pixels because the original image is degraded in the image formation process. Thus, the weights have to be calculated from a preprocessed image. In this paper, the degraded image is preprocessed by an image smoothing scheme that minimizes the L _{0} norm of the image gradient [25].
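A common way to obtain such weights is Gaussian patch similarity computed on the preprocessed reference image. The sketch below illustrates this; the patch size, search-window size, and filtering parameter h are illustrative choices, not values from the paper:

```python
import numpy as np

def nonlocal_weights(ref, i, j, patch=3, window=7, h=10.0):
    """Nonlocal weights between pixel (i, j) and pixels in its search window,
    computed on a preprocessed reference image `ref` (e.g. L0-smoothed).
    patch, window, and h are illustrative parameters."""
    r, s = patch // 2, window // 2
    pad = np.pad(ref, r + s, mode='wrap')     # periodic boundary, as in the text
    ci, cj = i + r + s, j + r + s
    p0 = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]   # patch around (i, j)
    w = np.zeros((window, window))
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            p = pad[ci + di - r:ci + di + r + 1, cj + dj - r:cj + dj + r + 1]
            w[di + s, dj + s] = np.exp(-np.sum((p0 - p) ** 2) / h ** 2)
    return w / w.sum()                         # normalized weights
```

The pixel's own patch always receives the largest weight, and the quality of all other weights depends directly on how faithful the preprocessed reference is, which is why the smoothing step matters.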
2.2 Overlapping sparsity prior
Sparsity-based regularization has obtained promising results for various ill-posed image restoration problems. The group sparsity concept was first used in the one-dimensional denoising problem [17, 18]. Considering that groups of large values may arise anywhere in the signal domain, a group of large values may straddle two of the predefined groups, especially in general signal denoising and restoration problems. Hence, if the group structure is treated as a prior, it is suitable to formulate the problem with overlapping groups, and it is natural to extend the overlapping group sparsity prior to two-dimensional problems such as image restoration. It has been used as a penalty term for TV models and proven effective in alleviating the staircase effect [15].
2.3 ADMM
ADMM is a special splitting case of the augmented Lagrangian method: it splits the complex problem into simpler subproblems, which can be solved easily by efficient operators such as the DFT and the shrinkage operator. It can also exploit the separable structure of the split objective function, which allows a straightforward treatment of various regularization terms, such as total variation [19]. The ADMM algorithm resolves a linear system, a matrix transformation that makes the problem two-sided. On the one hand, the transformed matrix is related to the Hessian of the objective function and carries second-order information; this accounts for its excellent computational efficiency, which has been proven faster than the classical iterative shrinkage thresholding (IST) algorithms [20] and even than their improved versions [21]. On the other hand, due to the typically huge size of the inversion, it is limited to problems that can be handled efficiently by exploiting some particular structure. In this paper, we use the fast Fourier transform (FFT) to improve the efficiency of ADMM. Convergence is guaranteed by the classical ADMM theory in the literature [22, 23]. In this subsection, we briefly review its basic theory for an intuitive understanding.
Here, δ is the penalty parameter that controls the linear constraint. According to ADMM theory, finding an optimal solution is equivalent to finding a saddle point \( L\left({x}_1^{\ast },{x}_2^{\ast },{q}^{\ast}\right) \) by an alternating minimization scheme, e.g., keeping x _{2} and q fixed while minimizing L with respect to x _{1}. We thus obtain the following ADMM iterative minimization algorithm:
The iterative strategy for the two subproblems follows the Gauss-Seidel fashion, so the variables x _{1} and x _{2} can be solved separately in alternating order. In [24], Eckstein and Bertsekas demonstrated that ADMM can be interpreted as an application of the proximal point algorithm. Meanwhile, a convergence result was proved for ADMM that allows approximate computation of \( {x}_1^{k+1} \) and \( {x}_2^{k+1} \). Here, we restate their result as it applies to (14), under slightly weaker assumptions and without over- or under-relaxation factors.
If there exists a saddle point of L(x _{1}, x _{2}, q) in Eq. (14), then \( {x}_1^k\to {x}_1^{\ast } \), \( {x}_2^k\to {x}_2^{\ast } \), and q ^{ k } → q ^{∗}, where \( \left({x}_1^{\ast },{x}_2^{\ast },{q}^{\ast}\right) \) is such a saddle point. On the other hand, if no such saddle point exists, then at least one of the sequences {μ _{ k }} or {v _{ k }} must be unbounded.
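The three-step pattern described above (x _{1}-step, x _{2}-step, multiplier update) can be made concrete on a toy instance. The sketch below applies it to min ½‖x − b‖² + λ‖z‖₁ subject to x = z, a deliberately simple stand-in for the paper's objective; δ and the iteration count are illustrative:

```python
import numpy as np

def shrink(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(b, lam=1.0, delta=1.0, iters=200):
    """ADMM on the toy split min_x 0.5||x - b||^2 + lam||z||_1 s.t. x = z,
    mirroring the Gauss-Seidel alternation in the text: quadratic x-step,
    shrinkage z-step, then the multiplier (dual) update with penalty delta."""
    x = np.zeros_like(b); z = np.zeros_like(b); q = np.zeros_like(b)
    for _ in range(iters):
        x = (b + delta * z - q) / (1.0 + delta)   # x-step: closed-form quadratic solve
        z = shrink(x + q / delta, lam / delta)    # z-step: shrinkage operator
        q = q + delta * (x - z)                   # multiplier update
    return x
```

Each subproblem is solved exactly in closed form, with no inner iterations, which is precisely the efficiency argument made for ADMM in Section 1.2; in the paper's setting the quadratic step is the one diagonalized by the FFT.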
3 Proposed model and numerical algorithm
3.1 Image region division
3.2 Novel nonlocal total variation
To overcome the drawback of the local TV model, researchers have proposed combining the nonlocal TV and the TV model for some image processing tasks. Tang et al. [26] combined a local TV filter and the nonlocal means algorithm for image denoising. In their work, a local TV filter was used only for rare patches, such as special edges and rare detail patches, and the nonlocal means algorithm was used for the rest. However, this model is of limited use for image deblurring, because the image structures are severely damaged by the blur and similar structures cannot be found accurately. Inspired by this work, we apply the nonlocal TV regularization only to the image details to protect the detail information, and the overlapping group sparsity prior as a sparser image representation constraint. In particular, our previous work [27] has proven the effectiveness of the overlapping group sparsity prior in suppressing the staircase effect.
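The region division underlying this strategy is a two-component decomposition consistent with the data term h ∗ f _{S} + h ∗ f _{D} below: the smoothed image carries the edges and flat regions, and the residual carries the details. A minimal sketch (the smoothing output is assumed given; in the paper it comes from the L0 scheme):

```python
import numpy as np

def decompose(f, f_smooth):
    """Split f into f_S (salient edges and flat regions, to be regularized by
    local TV + overlapping group sparsity) and f_D (details, to be regularized
    by nonlocal TV). f_smooth is the output of the preprocessing smoother."""
    f_S = f_smooth          # structure component
    f_D = f - f_smooth      # detail residual
    return f_S, f_D
```

By construction f _{S} + f _{D} reproduces f exactly, so nothing is discarded by the division; only the regularizer applied to each component differs.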
 (1)$$ {x}_1\coloneq f,{x}_2\coloneq \left[{v}_1,{v}_2,z\right]; $$
 (2)
\( {y}_1\left({x}_1\right)=\frac{\lambda }{2}{\left\Vert h\ast {f}_S+h\ast {f}_D-g\right\Vert}_2^2 \), y _{2}(x _{2}) = α‖ϕ(D _{(1)} f _{ S }) + ϕ(D _{(2)} f _{ S })‖ + ‖∇_{ ω } f _{ D }‖.
As shown by the theoretical analysis in [22], positive values of the parameters β _{1}, β _{2}, and γ ensure the convergence of ADMM; we will set them to specified values in the later experiments.
3.3 Numerical algorithm

With f _{ D } fixed, we search for \( {f}_S^{k+1} \) as a solution of

With \( {f}_S^{k+1} \) fixed, we obtain \( {f}_D^{k+1} \):
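The outer alternation between the two subproblems can be sketched as follows. The inner solvers here are placeholder gradient steps on the data term ‖h ∗ f _{S} + h ∗ f _{D} − g‖² only (assuming a symmetric blur kernel so the operator is self-adjoint); in the paper each subproblem additionally carries its regularizer and is solved by ADMM with FFT:

```python
import numpy as np

def alternate_fs_fd(g, blur, steps=50, lr=0.5):
    """Skeleton of the alternating scheme: with f_D fixed update f_S^{k+1},
    then with f_S^{k+1} fixed update f_D^{k+1}. `blur` applies h* and is
    assumed self-adjoint (e.g. a symmetric average kernel); lr and steps
    are illustrative."""
    f_S = np.zeros_like(g)
    f_D = np.zeros_like(g)
    for _ in range(steps):
        r = blur(f_S + f_D) - g           # residual with f_D fixed
        f_S = f_S - lr * blur(r)          # f_S-step (placeholder solver)
        r = blur(f_S + f_D) - g           # residual with the new f_S fixed
        f_D = f_D - lr * blur(r)          # f_D-step (placeholder solver)
    return f_S, f_D
```

Because each half-step only ever decreases the shared data-fit residual, the Gauss-Seidel ordering discussed in Section 2.3 carries over directly to this two-component setting.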
4 Results and discussion
4.1 Experimental settings
4.2 Parameters setting
4.3 Comparison with other state-of-the-art methods
PSNR (dB) and CPU times for the deblurring experiments in Fig. 4 (PSNR/times)
| Kernel | Image | Method [5] | Method [15] | Method [26] | Method [8] | Our method |
|---|---|---|---|---|---|---|
| 13 × 13 | Boat | 24.16/1.03 | 25.23/2.21 | 26.41/10.52 | 27.32/3.14 | 28.94/2.27 |
| 13 × 13 | Cameraman | 23.27/3.12 | 24.42/4.53 | 25.23/19.95 | 25.94/6.25 | 26.74/5.23 |
| 13 × 13 | Lena | 22.16/6.45 | 23.25/8.96 | 24.47/25.82 | 25.81/14.25 | 26.64/13.17 |
| 13 × 13 | Barbara | 23.04/6.34 | 23.41/8.89 | 24.22/27.91 | 25.66/14.36 | 26.64/13.09 |
| 13 × 13 | Man | 24.41/15.26 | 25.39/22.47 | 26.73/40.67 | 26.94/34.82 | 27.98/33.26 |
| 17 × 17 | Boat | 23.86/1.10 | 24.73/2.22 | 25.61/9.49 | 26.45/3.32 | 27.87/2.25 |
| 17 × 17 | Cameraman | 22.21/2.96 | 23.54/4.32 | 24.47/20.78 | 24.86/6.33 | 25.61/5.43 |
| 17 × 17 | Lena | 21.23/6.31 | 22.32/8.87 | 23.73/26.78 | 24.31/14.32 | 25.34/13.21 |
| 17 × 17 | Barbara | 20.79/6.47 | 22.14/9.01 | 23.56/27.67 | 24.73/14.29 | 25.47/13.18 |
| 17 × 17 | Man | 20.14/15.34 | 21.89/21.96 | 23.22/42.12 | 24.67/35.67 | 25.78/34.64 |
| 19 × 19 | Boat | 22.67/1.07 | 23.46/2.16 | 24.33/10.38 | 25.29/3.41 | 26.76/2.29 |
| 19 × 19 | Cameraman | 22.34/3.06 | 23.54/4.37 | 24.47/24.64 | 25.26/6.45 | 25.27/5.21 |
| 19 × 19 | Lena | 20.24/6.42 | 21.26/8.91 | 22.43/25.83 | 23.42/14.27 | 25.05/13.09 |
| 19 × 19 | Barbara | 20.12/6.36 | 21.38/8.85 | 22.89/26.77 | 23.34/14.32 | 24.93/13.03 |
| 19 × 19 | Man | 20.14/15.29 | 21.89/22.12 | 23.22/47.01 | 24.67/36.13 | 25.78/34.42 |
| 21 × 21 | Boat | 21.34/1.12 | 22.29/2.21 | 23.47/10.42 | 24.37/3.56 | 25.48/2.31 |
| 21 × 21 | Cameraman | 20.12/3.23 | 21.24/4.26 | 22.32/24.53 | 22.76/6.45 | 24.43/5.35 |
| 21 × 21 | Lena | 19.31/6.37 | 20.02/8.66 | 21.13/28.67 | 21.37/13.98 | 23.25/13.11 |
| 21 × 21 | Barbara | 19.42/6.41 | 20.14/8.72 | 21.13/28.73 | 21.47/13.78 | 23.65/13.02 |
| 21 × 21 | Man | 19.84/15.42 | 20.96/22.37 | 22.04/47.12 | 23.16/36.16 | 24.87/34.23 |
| 23 × 23 | Boat | 19.27/1.09 | 21.32/2.19 | 22.34/9.36 | 23.65/3.42 | 24.83/2.24 |
| 23 × 23 | Cameraman | 19.02/3.23 | 20.31/4.26 | 21.26/23.53 | 22.24/6.45 | 24.36/5.17 |
| 23 × 23 | Lena | 18.94/6.21 | 19.21/8.93 | 20.45/31.78 | 21.34/14.34 | 23.17/12.98 |
| 23 × 23 | Barbara | 18.26/6.35 | 19.19/8.89 | 20.63/31.68 | 21.27/14.45 | 23.66/13.11 |
| 23 × 23 | Man | 18.17/16.03 | 19.23/22.56 | 20.01/54.11 | 21.32/37.02 | 23.37/34.35 |
Figure 7 shows the recovery results of the compared methods on the classical test image “Lena”. The feathers on her hat are of particular interest for observation and comparison. As shown in Fig. 7a, the blurry and noisy “Lena” image, of size 512 × 512, is degraded by a 13 × 13 average kernel and Gaussian white noise with σ = 3. There are obvious staircase effects in Fig. 7b, c. In Fig. 7d, the method also divides the image region by a gradient extraction scheme, which preserves more details but costs much more time. The method in Fig. 7e is based on the nonlocal TV and uses a linearized proximal alternating minimization algorithm to improve efficiency. Our result in Fig. 7f shows higher PSNR and shorter CPU time.
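For reference, the PSNR figures reported above follow the standard definition 10 log₁₀(peak²/MSE). A small sketch (the 8-bit peak value of 255 is the usual convention and an assumption here, as the paper does not state it explicitly):

```python
import numpy as np

def psnr(clean, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    diff = clean.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A restoration that differs from the ground truth by a constant error of c gray levels scores 20 log₁₀(255/c) dB, which gives a quick sanity check when reading tables such as the one above.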
5 Conclusions
In this paper, a novel local and nonlocal total variation combination method has been proposed for image restoration in WSN. To apply the information properly, the image is divided into two regions by an image smoothing scheme. The local TV term is applied to the salient edges and constant regions, and the nonlocal term is applied to the details. The overlapping group sparsity is adopted as a prior constraint term in the proposed model to alleviate the staircase effect as much as possible. To improve efficiency, we optimize the energy function by the ADMM algorithm, which involves complex formulas but is easy to program. The parameter selection criterion for the two key parameters, established through numerical experiments, is the other main contribution. Comparisons with other state-of-the-art methods show that our method achieves higher efficiency and strikes a good balance between alleviating staircase effects and preserving image details.
Declarations
Acknowledgements
We thank the reviewers for helping us to improve this paper. This work is supported by the National Science Foundation of China (Grant No. 61501328) and the Doctoral Fund of Tianjin Normal University (Grant No. 52XB1406).
Funding
Funding was from the National Science Foundation of China (Grant No. 61501328) and the Doctoral Fund of Tianjin Normal University (Grant No. 52XB1406).
Authors’ contributions
MS proposes the innovation ideas and theoretical analysis, and LF carries out experiments and data analysis.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
1. Q Liang, X Cheng, SC Huang, D Chen, Opportunistic sensing in wireless sensor networks: theory and application. IEEE Trans. Comput. 63(8), 2002–2010 (2014)
2. F Zhao, L Wei, H Chen, Optimal time allocation for wireless information and power transfer in wireless powered communication systems. IEEE Trans. Veh. Technol. 65(3), 1830–1835 (2016)
3. F Zhao, H Nie, H Chen, Group buying spectrum auction algorithm for fractional frequency reuse cognitive cellular systems. Ad Hoc Netw. 58, 239–246 (2017)
4. F Zhao, W Wang, H Chen, Q Zhang, Interference alignment and game-theoretic power allocation in MIMO heterogeneous sensor networks communications. Signal Process. 126, 173–179 (2016)
5. M Lysaker, XC Tai, Iterative image restoration combining total variation minimization and a second-order functional. Int. J. Comput. Vis. 66(1), 5–18 (2006)
6. D Chen, L Cheng, Alternative minimisation algorithm for non-local total variational image deblurring. IET Image Process. 4(5), 353–364 (2010)
7. F Rousseau, A non-local approach for image super-resolution using intermodality priors. Med. Image Anal. 14(4), 594–605 (2010)
8. S Yun, H Woo, Linearized proximal alternating minimization algorithm for motion deblurring by nonlocal regularization. Pattern Recogn. 44(6), 1312–1326 (2011)
9. DH Jiang, X Tan, YQ Liang, et al., A new nonlocal variational bi-regularized image restoration model via split Bregman method. EURASIP J. Image Video Process. 1, 1–10 (2015)
10. J Liu, TZ Huang, XG Lv, et al., High-order total variation-based Poissonian image deconvolution with spatially adapted regularization parameter. Appl. Math. Model. 45, 516–529 (2017)
11. M Zhu, T Chan, An efficient primal-dual hybrid gradient algorithm for total variation image restoration (UCLA CAM Report, 2008)
12. X Zhang, M Burger, S Osher, A unified primal-dual algorithm framework based on Bregman iteration. J. Sci. Comput. 46(1), 20–46 (2011)
13. S Bonettini, V Ruggiero, On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration. J. Math. Imaging Vis. 44(3), 236–253 (2012)
14. ZJ Bai, D Cassani, M Donatelli, et al., A fast alternating minimization algorithm for total variation deblurring without boundary artifacts. J. Math. Anal. Appl. 415(1), 373–393 (2014)
15. J Liu, TZ Huang, IW Selesnick, et al., Image restoration using total variation with overlapping group sparsity. Inf. Sci. 295, 232–246 (2015)
16. G Gilboa, S Osher, Nonlocal operators with applications to image processing. Multiscale Model. Simul. 7(3), 1005–1028 (2008)
17. W Dong, L Zhang, G Shi, et al., Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 20(7), 1838–1857 (2011)
18. WZ Shao, HS Deng, Q Ge, et al., Regularized motion blur-kernel estimation with adaptive sparse image prior learning. Pattern Recogn. 51, 402–424 (2016)
19. MS Almeida, M Figueiredo, Deconvolving images with unknown boundaries using the alternating direction method of multipliers. IEEE Trans. Image Process. 22(8), 3074–3086 (2013)
20. I Daubechies, M Defrise, CD Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57(11), 1413–1457 (2004)
21. SJ Wright, RD Nowak, M Figueiredo, Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57(7), 2479–2493 (2009)
22. W Deng, W Yin, On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66(3), 889–916 (2016)
23. B He, H Yang, Some convergence properties of a method of multipliers for linearly constrained monotone variational inequalities. Oper. Res. Lett. 23(3), 151–161 (1998)
24. J Eckstein, DP Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1–3), 293–318 (1992)
25. L Xu, C Lu, Y Xu, J Jia, Image smoothing via L0 gradient minimization. ACM Trans. Graph. 30(6), 1–12 (2011)
26. S Tang, W Gong, W Li, et al., Non-blind image deblurring method by local and nonlocal total variation models. Signal Process. 94(1), 339–349 (2014)
27. M Shi, T Han, S Liu, Total variation image restoration using hyper-Laplacian prior with overlapping group sparsity. Signal Process. 126, 65–76 (2015)
28. X Zhang, M Burger, X Bresson, et al., Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 3(3), 253–276 (2010)