 Research
 Open Access
A novel local and nonlocal total variation combination method for image restoration in wireless sensor networks
EURASIP Journal on Wireless Communications and Networking volume 2017, Article number: 167 (2017)
Abstract
In this paper, we propose a novel local and nonlocal total variation combination method for image restoration in wireless sensor networks (WSN), which plays an important role in improving the quality of the transmitted image. First, the degraded image is preprocessed by an image smoothing scheme that divides it into two regions. One contains salient edges and flat regions and is regularized by the local TV term; the other is rich in image details and is regularized by the nonlocal TV term. Then, the alternating direction method of multipliers (ADMM) algorithm is adopted to optimize the complex objective function, and two key parameters are discussed for better performance. Finally, we compare our method with several recent state-of-the-art methods and illustrate its efficiency and performance by experimental results in peak signal-to-noise ratio (PSNR) and computing time.
Introduction
With the rapid development of wireless sensor networks, there are higher requirements for signal transmission and processing [1,2,3,4]. However, a two-dimensional image signal is inevitably degraded in the process of image acquisition, transmission, and processing, and image restoration techniques are needed to improve the quality of the obtained image. Image restoration is one of the most fundamental issues in imaging science and other important applications. It plays an important role in many mid-level and high-level image processing tasks. In this paper, we focus on spatially invariant systems and formulate a common degradation model as
$$ g=h\ast f+n $$
where g is the blurred and noisy image, f is the desired true image, ∗ represents convolution, and n denotes additive white Gaussian noise with zero mean. h is the linear, spatially invariant blur kernel, which is usually modeled as a blurring matrix constructed from the discrete point spread function (PSF). If the PSF is known, the problem is non-blind deconvolution; if the PSF is unknown, it becomes blind deconvolution. In this paper, we focus only on non-blind image restoration.
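To make the degradation model concrete, the following NumPy sketch forms g = h ∗ f + n under the periodic boundary condition assumed later in the paper. The function name and defaults are illustrative, not from the paper; the paper's own experiments are in Matlab.

```python
import numpy as np

def degrade(f, h, sigma, rng=None):
    """Form g = h * f + n under periodic boundary conditions.

    f : clean image, h : blur kernel (PSF), sigma : noise std.
    Illustrative sketch of the degradation model, not the paper's code.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    m, n = f.shape
    # Embed the PSF in an m x n array centred at the origin so that
    # circular convolution equals pointwise multiplication of FFTs.
    H = np.zeros((m, n))
    kh, kw = h.shape
    H[:kh, :kw] = h
    H = np.roll(H, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    g = np.real(np.fft.ifft2(np.fft.fft2(H) * np.fft.fft2(f)))
    return g + sigma * rng.standard_normal((m, n))
```

With a 1 × 1 identity kernel and zero noise, `degrade` returns the input unchanged, which is a quick sanity check of the FFT embedding.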
Problem setup
For the image restoration problem, we seek to estimate the original image f by the following variational formulation:
$$ \underset{f}{\min}\frac{\lambda }{2}{\left\Vert h\ast f-g\right\Vert}_2^2+\phi (f) $$
where ‖.‖_{2} denotes the Euclidean norm, ϕ(f) is usually called the regularization term, and λ > 0 is a regularization parameter that controls the balance between the fidelity term and the regularization term. Even if the blur kernel is known, the problem is still highly ill-posed, and recovering the original sharp image is difficult. This is because the blur kernel is a kind of low-pass filter, which tends to suppress high-frequency information such as textures and edges. Hence, the solution needs to be regularized by a proper constraint model. The classical regularization model is the TV model, referred to as the local TV model [5], with the form
$$ \underset{f}{\min}\frac{\lambda }{2}{\left\Vert h\ast f-g\right\Vert}_2^2+{\left\Vert f\right\Vert}_{TV} $$
where ‖⋅‖_{2} denotes the L _{2} norm, \( {\left\Vert \cdot \right\Vert}_2^2 \) denotes the square of ‖⋅‖_{2}, and ‖f‖_{ TV } stands for the total variation of the image, often defined as
$$ {\left\Vert f\right\Vert}_{TV}={\left\Vert \nabla f\right\Vert}_1=\sum \sqrt{{\left({\nabla}_{(1)}f\right)}^2+{\left({\nabla}_{(2)}f\right)}^2} $$
Here, ∇ is the local gradient operator, and ∇_{(1)} f and ∇_{(2)} f represent the local first-order differences of f in the horizontal and vertical directions, respectively. The local TV model has been proven to perform well in preserving edges due to its linear penalty on differences between adjacent pixels. However, it yields staircase artifacts that smooth out image details. Therefore, it is of great importance to model appropriate prior knowledge from natural images, or to impose a more appropriate prior assumption, to constrain the solution. The underlying motivation of this paper is to establish appropriate regularization terms and to improve the efficiency of the numerical algorithm for the complex objective function.
Related works
In recent years, the nonlocal TV has been successfully used in image processing tasks [6, 7]. It uses information from pixels over the whole image rather than only adjacent pixels, and it combines the variational framework with a nonlocal self-similarity constraint to restore image details. This is the main difference from the local TV model. However, if the nonlocal self-similarity constraint is the only constraint, similar image structures still cannot be estimated accurately. When the TV model and the nonlocal self-similarity constraint are both applied to the entire image, performance is compromised by the limitations of the TV model [8]. Besides, since the nonlocal total variation requires weighted differences between pixels over the whole image, it is more time consuming and needs more efficient algorithms. The split Bregman method has been proposed to solve the nonlocal TV image restoration problem, but its efficiency is unsatisfactory [9, 10]: it needs not only the outer iteration of the subproblem but also inner iterations for the nonlocal Laplacian operator. Zhu et al. propose an efficient primal-dual hybrid gradient algorithm, which alternates between the primal and dual formulations of total variation [11]. A unified primal-dual algorithm framework has been proposed to solve the local total variation problem with L _{1} basis pursuit and TV-L _{2} minimization [12]. Bonettini et al. establish the convergence of a general primal-dual method for nonsmooth convex optimization problems whose structure is typical in the imaging framework [13]. In these approaches, many parameters have to be chosen, which is time consuming. To overcome this drawback, the alternating direction method of multipliers (ADMM) has been widely used in recent image processing tasks [14, 15]. Its outstanding property is that the subproblems can be solved directly, without inner iterations.
Hence, the problem needs to be addressed from two aspects: one is how to choose a good regularization functional ϕ(f), which is an active research area in image science; the other is how to shorten the computation time without yielding staircase artifacts, which is also a challenging problem.
The rest of the paper is organized as follows. In Section 2, we introduce the definition of the nonlocal total variation, the principle of overlapping group sparsity, and the ADMM algorithm, which are the essential tools in our method. Section 3 introduces the objective function of the proposed model and discusses the parameter selection criteria. In Section 4, we carry out experiments and compare our method with other state-of-the-art methods. Finally, we conclude in Section 5.
Preliminaries
Nonlocal total variation
First, we give the notations used throughout this paper. Assume images are of size m × n and are represented as mn-vectors formed by stacking the rows of the image matrix. Denote the Euclidean space R ^{mn} as V and define Q = V × V. The ith components of x ∈ V and y ∈ Q are denoted as x _{ i } ∈ R and \( {y}_i={\left({y}_i^{(1)},{y}_i^{(2)}\right)}^{\mathrm{T}}\in {R}^2 \), respectively. Inner products and Euclidean norms are defined as
D _{(1)}, D _{(2)} ∈ R ^{mn × mn} are the gradient matrices in the vertical and horizontal directions, and we have Df _{ i } = [(D _{(1)} f) _{ i }, (D _{(2)} f) _{ i }]^{T} for each f ∈ V. By stacking the ith rows of D _{(1)} and D _{(2)} together, we get a two-row matrix. Define the global first-order finite difference operator as D = [(D _{(1)})^{T}, (D _{(2)})^{T}]^{T} ∈ R ^{2mn × mn}, so that Df ∈ Q, and assume the periodic boundary condition for all images in this paper. The discrete gradient operators are defined by \( {\left({D}_{(1)}f\right)}_{i,j}=\left\{\begin{array}{c}\hfill {f}_{i+1,j}-{f}_{i,j}\kern0.75em \mathrm{if}\kern0.75em i<m\hfill \\ {}\hfill {f}_{1,j}-{f}_{m,j}\kern0.75em \mathrm{if}\kern0.75em i=m\hfill \end{array}\right. \) and \( {\left({D}_{(2)}f\right)}_{i,j}=\left\{\begin{array}{c}\hfill {f}_{i,j+1}-{f}_{i,j}\kern0.75em \mathrm{if}\kern0.75em j<n\hfill \\ {}\hfill {f}_{i,1}-{f}_{i,n}\kern0.75em \mathrm{if}\kern0.75em j=n\hfill \end{array}\right. \).
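Under the periodic boundary condition, the difference operators and the isotropic TV value can be sketched in a few lines of NumPy (illustrative code, not the paper's Matlab implementation); `np.roll` wraps around, which realizes the i = m (respectively j = n) branch for free:

```python
import numpy as np

def D1(f):
    """Vertical periodic differences: f[i+1, j] - f[i, j]."""
    return np.roll(f, -1, axis=0) - f

def D2(f):
    """Horizontal periodic differences: f[i, j+1] - f[i, j]."""
    return np.roll(f, -1, axis=1) - f

def tv_isotropic(f):
    """Discrete isotropic TV: sum of sqrt((D1 f)^2 + (D2 f)^2)."""
    return np.sum(np.sqrt(D1(f) ** 2 + D2(f) ** 2))
```

A constant image has zero TV, while any edge contributes in proportion to its height, which is the linear penalty on adjacent-pixel differences discussed above.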
Then, we use the definitions and notation of the nonlocal total variation introduced in [16]. Let Ω ⊂ R ^{2} and x ∈ Ω, let u : Ω → R be a real function, and let ω be a nonnegative symmetric weight function, i.e., ω(x, y) = ω(y, x). The nonlocal gradient ∇_{ ω } u(x) is defined as the vector of all partial differences ∇_{ ω } u(x, ⋅) at x:
where ω(x, y) is the weight function between x and y defined based on the image u. The graph divergence div_{ ω } of a vector p : Ω × Ω → R can be defined as
The weight function is defined as the nonlocal means weight function:
where G _{ a } is the Gaussian kernel with standard deviation a, h is the filtering parameter related to the standard deviation of the noise, and the ⋅ in f(x + ⋅) denotes a square patch centered at point x. When the reference image f is known, the nonlocal means filter is a linear operator. Now, we can define the nonlocal TV norm as the isotropic L _{1} norm of the weighted graph gradient ∇_{ ω } u(x):
The main purpose of the nonlocal regularization is to generalize the local gradient and divergence concepts into nonlocal form. Generally, a reference image as close as possible to the original image is desired so that the weights can be obtained more exactly. However, it is hard to get precise weights between pixels because the original image is degraded in the image formation process. Thus, the weights have to be calculated from a preprocessed image. In this paper, the degraded image is preprocessed by an image smoothing scheme that minimizes the L _{0} norm of the image gradient [25].
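For intuition, the nonlocal means weight between two pixels can be sketched as follows. The patch size, filtering parameter h, and Gaussian width a are illustrative defaults, not the paper's settings:

```python
import numpy as np

def nl_weight(f, x, y, patch=3, h=10.0, a=1.0):
    """Nonlocal-means weight w(x, y) between pixels x and y of image f.

    Compares Gaussian-weighted patches around x and y; a sketch of the
    weight function in the text with illustrative defaults.
    """
    r = patch // 2
    # Gaussian kernel G_a over the patch offsets
    ii, jj = np.mgrid[-r:r + 1, -r:r + 1]
    G = np.exp(-(ii ** 2 + jj ** 2) / (2 * a ** 2))
    fp = np.pad(f, r, mode='wrap')  # periodic boundary, as in the paper
    px = fp[x[0]:x[0] + patch, x[1]:x[1] + patch]  # patch centred at x
    py = fp[y[0]:y[0] + patch, y[1]:y[1] + patch]  # patch centred at y
    d2 = np.sum(G * (px - py) ** 2)
    return np.exp(-d2 / (h ** 2))
```

The weight is 1 for identical patches and decays with the Gaussian-weighted patch distance; it is symmetric, ω(x, y) = ω(y, x), as required above.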
Overlapping sparsity prior
Sparsity-based regularization has obtained promising results for various ill-posed image restoration problems. The group sparsity concept was first used in the one-dimensional denoising problem [17, 18]. Considering that groups of large values may arise anywhere in the signal domain, a group of large values may straddle two of the predefined groups, especially in general signal denoising and restoration problems. Hence, if the group structure is treated as a prior, it is suitable to formulate the problem with overlapping groups, and it is natural to extend the overlapping group sparsity prior to two-dimensional problems such as image restoration. It has been used as a penalty term in TV models and proven effective for alleviating the staircase effect [15].
In [15], the vector s ∈ R ^{n} with a kpoint group has been defined as
Here, s _{ i, k } denotes a block of k contiguous samples of s starting from the ith index. A group sparsity regularizer is defined as
For the twodimensional case, a k × k point group of the image f ∈ R ^{n × n} is defined as
By stacking the k columns of the matrix \( {\tilde{f}}_{i,j,k} \), i.e., \( {f}_{i,j,k}={\tilde{f}}_{i,j,k}\left(:\right) \), a vector is obtained and the overlapping group sparsity (OGS) functional of a twodimensional array can be defined as
The regularization term ϕ(f) based on the image gradients in the vertical and horizontal directions is denoted as
If k = 1, it reduces to the special case commonly referred to as the anisotropic TV functional, and the usual TV regularization term is defined as
where the discrete gradient operator ∇ : R ^{mn} → R ^{2 × mn} is defined by (∇f)_{ i, j } = ((D _{(1)} f)_{ i, j }, (D _{(2)} f)_{ i, j }).
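A direct (unoptimized) sketch of the overlapping group sparsity functional, summing the Euclidean norms of all k × k groups. Taking the groups periodically, so that every pixel starts a group, is our assumption here:

```python
import numpy as np

def ogs(f, k=3):
    """Overlapping group sparsity functional of a 2-D array f.

    Sums the Euclidean norms of the k x k group starting at every
    pixel; groups wrap around the image boundary (an assumption of
    this sketch, not necessarily the paper's convention).
    """
    fp = np.pad(f, ((0, k - 1), (0, k - 1)), mode='wrap')
    m, n = f.shape
    total = 0.0
    for i in range(m):
        for j in range(n):
            total += np.linalg.norm(fp[i:i + k, j:j + k])
    return total
```

For k = 1 each group is a single pixel, so the functional collapses to the plain L1 norm, matching the anisotropic TV special case noted above.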
ADMM
ADMM is a special splitting case of the augmented Lagrangian method: it splits a complex problem into simpler subproblems that can be solved efficiently by operators such as the DFT and the shrinkage operator. It can also exploit the separable structure of the split objective functions, which allows a straightforward treatment of various regularization terms, such as total variation regularization [19]. Each ADMM iteration solves a linear system, which makes the method two-sided. On the one hand, the system matrix is related to the Hessian of the objective function and thus carries second-order information; this underlies its excellent computational efficiency, which has been proven faster than the classical iterative shrinkage thresholding (IST) algorithms [20] and even than their improved versions [21]. On the other hand, due to the typically huge size of the inversion, the method is limited to problems in which the linear system can be handled efficiently by exploiting particular structure. In this paper, we use the fast Fourier transform (FFT) to improve the efficiency of ADMM. Convergence is guaranteed by the classical ADMM theory in the literature [22, 23]. In this subsection, we briefly review its basic theory for an intuitive understanding.
The ADMM theory was proposed to solve the optimization problem with the following constrained separable subproblem
where x _{ i } ∈ X _{ i }, i = 1, 2, y _{ i } : X _{ i } → R are closed convex functions, \( {X}_i\in {R}^{m_i} \) are nonempty closed convex sets, \( {A}_i\in {R}^{l\times {m}_i} \) are linear transforms, and a ∈ R ^{l} is a given vector. With q ∈ R ^{l} as the Lagrange multiplier of the linear constraint, the augmented Lagrangian function of problem (16) is
Here, δ is the penalty parameter that controls the linear constraint. According to ADMM theory, finding an optimal solution is equivalent to finding a saddle point \( L\left({x}_1^{\ast },{x}_2^{\ast },{q}^{\ast}\right) \) by an alternating minimization scheme, e.g., keeping x _{2} and q fixed while minimizing L with respect to x _{1}. We thus obtain the following ADMM iterative minimization algorithm:
The two subproblems are iterated in a Gauss-Seidel fashion, so the variables x _{1} and x _{2} can be solved separately in alternating order. In [24], Eckstein and Bertsekas demonstrated that ADMM can be interpreted as an application of the proximal point algorithm. Meanwhile, a convergence result was proved for ADMM that allows approximate computation of \( {x}_1^{k+1} \) and \( {x}_2^{k+1} \). Here, we restate their result as it applies to (14), under slightly weaker assumptions and without over- or under-relaxation factors.
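The alternating x-, z-, and multiplier updates can be illustrated on a textbook toy problem, min 0.5‖x − b‖² + λ‖z‖₁ subject to x = z. This is not the paper's model; it only shows the Gauss-Seidel structure of the iteration:

```python
import numpy as np

def admm_l1(b, lam=1.0, delta=1.0, iters=200):
    """Toy ADMM for min_x 0.5*||x - b||^2 + lam*||z||_1  s.t.  x = z.

    A textbook illustration of the alternating scheme above, using the
    scaled multiplier q; names and defaults are illustrative.
    """
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    q = np.zeros_like(b)  # scaled Lagrange multiplier
    for _ in range(iters):
        # x-subproblem: quadratic, closed form
        x = (b + delta * (z - q)) / (1.0 + delta)
        # z-subproblem: soft-thresholding (shrinkage operator)
        v = x + q
        z = np.sign(v) * np.maximum(np.abs(v) - lam / delta, 0.0)
        # multiplier (dual) update
        q = q + x - z
    return z
```

For this separable problem the exact minimizer is the soft-thresholding of b by λ, so the iterates can be checked against a known answer.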
Theorem 2.1 (Eckstein, Bertsekas [24]) Consider the problem (13) where y _{1} and y _{2} are closed proper convex functions, A _{1} has full column rank and y _{2}(x _{2}) + ‖A _{2} x _{2}‖^{2} is strictly convex. Let q, x _{2} ∈ R be arbitrary and δ > 0. Suppose we are given sequences {μ _{ k }} and {v _{ k }} such that μ _{ k } ≥ 0, v _{ k } ≥ 0, \( {\sum}_{k=0}^{\infty }{\mu}_k<\infty \), and \( {\sum}_{k=0}^{\infty }{v}_k<\infty \). Suppose that
If there exists a saddle point of L(x _{1}, x _{2}, q) in Eq. (14), then \( {x}_1^k\to {x}_1^{\ast } \), \( {x}_2^k\to {x}_2^{\ast } \), and q ^{k} → q ^{∗}, where \( \left({x}_1^{\ast },{x}_2^{\ast },{q}^{\ast}\right) \) is such a saddle point. On the other hand, if no such saddle point exists, then at least one of the sequences {μ _{ k }} or {v _{ k }} must be unbounded.
Proposed model and numerical algorithm
Image region division
The image is preprocessed by an image smoothing scheme, the same as in [25]; we take experiments on “Barbara” as an example. As shown in Fig. 1, Fig. 1a is the original image, Fig. 1b shows the salient edges and constant regions of (a), and Fig. 1c shows the details of (a), obtained by subtracting (b) from (a). Figure 1d is the blurred image, with the 19 × 19 blur kernel shown at the lower left. Figure 1e, f show the edges and constant regions, and the details, of (d), respectively. Comparing the extracted edges and constant regions in Fig. 1b, e, we cannot see much obvious difference, although the blurred image (d) is seriously damaged. Meanwhile, the details of the blurred image shown in (f) are visually even clearer than those of the original image (a).
Novel nonlocal total variation
To overcome the drawback of the local TV model, researchers have proposed combining the nonlocal TV and the local TV model for some image processing tasks. Tang et al. [26] combined a local TV filter and the nonlocal means algorithm for image denoising. In their work, the local TV filter was used only for rare patches, such as special edges and rare detail patches, and the nonlocal means algorithm was used for the rest. However, the model is of limited use for image deblurring because the image structures are severely damaged and similar structures cannot be found accurately. Inspired by this, we apply the nonlocal TV regularization only to the image details, to protect the detail information, and the overlapping group sparsity prior as a sparser image representation constraint. In particular, our previous work [27] has proven the effectiveness of the overlapping group sparsity prior in suppressing the staircase effect.
Now, a novel nonlocal TV based image restoration model is defined as follows:
where λ and α are regularization parameters. The original image f is divided into f _{ S } and f _{ D } by the above-mentioned gradient extraction scheme: f _{ S } denotes the salient edges and constant regions, and f _{ D } denotes the image details. In order to apply the ADMM idea, we introduce the auxiliary variables v _{1} = D _{(1)} f, v _{2} = D _{(2)} f, and z = f. Compared with the standard ADMM algorithm in Eq. (16), problem (18) fits the framework with the following specification:

(1)
$$ {x}_1\coloneq f,{x}_2\coloneq \left[{v}_1,{v}_2,z\right]; $$
(2)
\( {y}_1\left({x}_1\right)=\frac{\lambda }{2}{\left\Vert h\ast {f}_S+h\ast {f}_D-g\right\Vert}_2^2 \), y _{2}(x _{2}) = α‖ϕ(D _{(1)} f _{ S }) + ϕ(D _{(2)} f _{ S })‖ + ‖∇_{ ω } f _{ D }‖.
Then, the Augmented Lagrangian function of the minimization problem (18) is defined as follows:
where β _{1}, β _{2} > 0 are penalty parameters for the linear constraints corresponding to the auxiliary variables z and v, and ε and η are the Lagrange multipliers. Starting at f = f ^{k}, ε = ε ^{k}, and η = η ^{k}, applying ADMM yields the iterative scheme
As in the theoretical analysis in [22], any positive values of the parameters β _{1}, β _{2}, and γ ensure the convergence of ADMM, and we will set them to specific values in the later experiments.
Numerical algorithm
Following the ADMM scheme in Algorithm 1, we employ alternating optimization to estimate the divided image regions f _{ S } and f _{ D }.

With f _{ D } fixed, we search for \( {f}_S^{k+1} \) as the solution of
The minimization of Eq. (23) with respect to f _{ S } is a least squares problem and can be solved by the fast Fourier transform (FFT), which requires only O(n log n) arithmetic operations, where n is the number of pixels of the given image. The corresponding normal equation of Eq. (23) is
where H denotes the blurring matrix formed by h. Since v _{1} and v _{2} have the same expression, their subproblem corresponds to the following optimization problem,
which can be solved with the extremely fast shrinkage formula, yielding

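In code, the per-pixel isotropic shrinkage step can be sketched as follows. This is the standard soft-thresholding formula for a 2-D gradient field; the threshold τ stands for the ratio of the regularization and penalty parameters in the actual subproblem (an assumption of this sketch):

```python
import numpy as np

def shrink2(v1, v2, tau):
    """Isotropic 2-D shrinkage applied pixel-wise to (v1, v2).

    Each gradient vector is shortened by tau, or set to zero when its
    magnitude is below tau -- the closed form of the v-subproblem.
    """
    mag = np.sqrt(v1 ** 2 + v2 ** 2)
    # Guard against division by zero where the magnitude vanishes.
    scale = np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)
    return scale * v1, scale * v2
```

A vector of magnitude 5 thresholded at τ = 1 keeps its direction and shrinks to magnitude 4, while any vector shorter than τ is zeroed out.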
With \( {f}_S^{k+1} \) fixed, we obtain \( {f}_D^{k+1} \):
This is a classical nonlocal TV image deblurring problem, and we adopt the fast nonlocal TV deconvolution algorithm of [28]. The image smoothing scheme is performed at each iteration, and \( {f}_S^{k+1} \) is used as the reference image in Eq. (27), which helps improve the accuracy of the algorithm. Our proposed method is summarized as follows:
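As an illustration of the FFT-based solve of the quadratic f _{ S } subproblem, the following simplified sketch solves (λH^T H + βI)f = λH^T g + βz in the Fourier domain. The paper's normal equation also carries difference-operator terms; they diagonalize under the FFT in the same way and are omitted here for brevity, so this is a sketch under stated assumptions, not the full implementation:

```python
import numpy as np

def solve_fs(g, H_otf, z, lam, beta):
    """Solve (lam*H^T H + beta*I) f = lam*H^T g + beta*z via FFT.

    H_otf is the optical transfer function (fft2 of the centred blur
    kernel). Under periodic boundaries the system is diagonal in the
    Fourier domain, giving the O(n log n) cost quoted in the text.
    """
    num = lam * np.conj(H_otf) * np.fft.fft2(g) + beta * np.fft.fft2(z)
    den = lam * np.abs(H_otf) ** 2 + beta
    return np.real(np.fft.ifft2(num / den))
```

With the identity blur (H_otf ≡ 1) and z = g, the solve returns g itself, which is a convenient correctness check of the Fourier-domain division.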
Results and discussion
Experimental settings
In order to demonstrate the viability and efficiency of the proposed method, experiments are carried out on various images and kernels, and the results are compared with other state-of-the-art methods. The test images are classical images: “Boat” (128 × 128), “Cameraman” (256 × 256), “Lena” (512 × 512), “Barbara” (512 × 512), and “Man” (1024 × 1024). Five different motion blur kernels are used in the experiments. They are freely available from http://www.wisdom.weizmann.ac.il/~levina/ and frequently used in current research on image restoration, as shown in Fig. 2a. The sizes of the kernels are 13 × 13, 17 × 17, 19 × 19, 21 × 21, and 23 × 23, as shown in Fig. 2b. We also try an average blur kernel and add Gaussian noise to degrade the image. The stopping criterion is ‖f ^{m + 1} − f ^{m}‖_{2}/‖f ^{m}‖_{2} < 0.001. The PSNR is used to evaluate the quality of the recovered results. All experiments are conducted in Matlab on a desktop PC with a 2.3 GHz Intel Core processor and 4.0 GB memory.
Parameters setting
The PSNR is defined as
$$ \mathrm{PSNR}=10{\log}_{10}\frac{255^2}{\frac{1}{mn}\sum_{i,j}{\left(\widehat{f}\left(i,j\right)-f\left(i,j\right)\right)}^2} $$
where \( \widehat{f}\left(i,j\right) \) denotes the recovered image and f(i, j) denotes the true image. Since the algorithm uses fast Fourier transforms, boundary problems need to be handled; otherwise, they seriously affect the PSNR. The “edgetaper” function in Matlab is used together with the periodic boundary condition. There are many auxiliary variables and coefficients in the objective formulation, so its form seems complicated, as shown in Eqs. (19) and (23), especially in the algorithm description and theoretical analysis. In fact, the auxiliary variables are helpful in programming and reduce computational redundancy.
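The PSNR used for evaluation can be computed as follows for 8-bit images (a standard formula; the helper name is ours):

```python
import numpy as np

def psnr(f_hat, f, peak=255.0):
    """PSNR in dB: 10*log10(peak^2 / MSE) between recovery and truth."""
    mse = np.mean((f_hat.astype(float) - f.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

As a sanity check, an error equal to the full dynamic range gives 0 dB, and an RMS error of one tenth of the range gives 20 dB.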
Considering the penalty parameters β _{1}, β _{2}, and γ, theoretically any positive values ensure the convergence of ADMM. In practice, there are usually two ways to determine them: one tries several values and picks one with satisfactory performance; the other applies self-adaptive adjustment rules starting from an arbitrary initial guess but requires expensive computation. In our experience, a well-tuned constant value performs as well as a value obtained by the self-adaptive strategy. For our proposed model, we tune and set β _{1} = 0.025 and β _{2} = 0.05. The parameters λ = 0.04, α = 0.05, and γ = 0.012 (≈ λ/3) are chosen based on the experience of our previous work [27]. In this subsection, we mainly focus on the iteration number k and the window of the nonlocal TV operator. As shown in Fig. 3, the two curves present the numerical analysis of the recovery of the blurred “Barbara” image (512 × 512) in Fig. 1d by our proposed method. Figure 3a shows that as the iterations increase, the objective function value becomes smaller and the algorithm tends to be stable. Figure 3b shows that the PSNR is not significantly improved once the number of iterations reaches 50. Hence, in the following experiments, we set the number of iterations to k = 50.
Next, the blurred image in Fig. 1d is used to discuss the weight update for the nonlocal operator. An appropriate size of the search window can guarantee better PSNR and less CPU time. The window size is initially set to 5 × 5 and increased in steps of two; we also add the data for the window size 3 × 3. Computation time and PSNR are recorded at the same time, and the results are shown in Fig. 4. Figure 4a shows that as the window becomes larger, the computing time becomes longer, whereas Fig. 4b shows that the PSNR is not greatly improved. This conclusion is similar to that of ref. [8], which also discusses the size of the search window. In particular, although there is an obvious difference in PSNR values when the search window is set to 9 × 9 and 11 × 11, the CPU time becomes much longer. In order to shorten the CPU time and obtain a reasonable PSNR, we use a 7 × 7 search window in the following numerical experiments.
Comparison with other state-of-the-art methods
In this subsection, we use the above images and the blur kernels in Fig. 2 to verify the efficiency of the proposed model and compare it with four state-of-the-art methods: the local TV method [5], the overlapping group sparsity TV method [15], the combination of local and nonlocal methods using a Bregmanized operator splitting iterative scheme [26], and the nonlocal TV-based method using a linearized proximal alternating minimization algorithm [8]. The PSNR values and CPU times are reported in Table 1. It can be noted that our method obtains higher PSNR and shorter CPU times than the other methods.
Because our method performs the image division step in each iteration, it does not significantly shorten the computing time. As a compromise, we achieve a better recovery effect that suppresses staircase effects and protects image details and edges. Experiments on the blurred “Barbara” image in Fig. 1d are shown in Fig. 5, where the face region of the Barbara image is cropped so that the details are clearer. Figure 5a is the image blurred by a 19 × 19 blur kernel. Figure 5b is the image recovered by the local TV method, which causes obvious staircase effects. In Fig. 5c, the staircase effect is alleviated but still unsatisfactory. In Fig. 5d, more details are restored, but too much time is consumed. In Fig. 5e, the method effectively saves computing time and gets a relatively satisfactory recovery effect. Our result is shown in Fig. 5f; by comparing the pattern on the scarf, it can be seen that our method suppresses staircase effects and preserves more details.
We also test on a blurred and noisy image, as shown in Figs. 6 and 7. The image “Cameraman” (256 × 256) is degraded by a 9 × 9 average kernel and Gaussian white noise with σ = 3, as shown in Fig. 6a. In Fig. 6b, there are serious staircase effects from the local TV method, which shows the drawback of the total variation-based framework. In Fig. 6c, the noise is removed, but the staircase effect is still very obvious. In Fig. 6d, the staircase effect is mitigated, but too much time is consumed, which is attributed to the numerical processing of the algorithm. Comparing Fig. 6e, f, we note that our method obtains a better result and saves more computing time.
Figure 7 shows the recovery results of the compared methods on the classical test image “Lena”. The feathers on her hat are of particular interest for observation and comparison. As shown in Fig. 7a, the blurry and noisy “Lena” image (512 × 512) is degraded by a 13 × 13 average kernel and Gaussian white noise with σ = 3. There are obvious staircase effects in Fig. 7b, c. In Fig. 7d, the method also divides the image region by a gradient extraction scheme, which preserves more details but costs much more time. In Fig. 7e, the method is based on the nonlocal TV and uses a linearized proximal alternating minimization algorithm to improve efficiency. Our result in Fig. 7f shows higher PSNR and shorter CPU time.
Conclusions
In this paper, a novel local and nonlocal total variation combination method has been proposed for image restoration in WSN. To use the image information properly, the image is divided into two regions by an image smoothing scheme. The local TV term is applied to the salient edges and constant regions, and the nonlocal term is applied to the details. The overlapping group sparsity is adopted as a prior constraint term in the proposed model to alleviate the staircase effect as much as possible. To improve efficiency, we optimize the energy function with the ADMM algorithm, which has complex formulas but is easy to program. The parameter selection criteria for two key parameters are discussed through numerical experiments, which is the other main contribution. Comparisons with other state-of-the-art methods show that our method achieves higher efficiency and strikes a good balance between alleviating staircase effects and preserving image details.
References
 1.
Q Liang, X Cheng, SC Huang, D Chen, Opportunistic sensing in wireless sensor networks: theory and application. IEEE Trans. Comput. 63(8), 2002–2010 (2014)
 2.
F Zhao, L Wei, H Chen, Optimal time allocation for wireless information and power transfer in wireless powered communication systems. IEEE T. Veh. Technol 65(3), 1830–1835 (2016)
 3.
F Zhao, H Nie, H Chen, Group buying spectrum auction algorithm for fractional frequency reuses cognitive cellular systems. Ad Hoc Netw. 58, 239–246 (2017)
 4.
F Zhao, W Wang, H Chen, Q Zhang, Interference alignment and gametheoretic power allocation in MIMO heterogeneous sensor networks communications. Signal Process. 126, 173–179 (2016)
 5.
M Lysaker, XC Tai, Iterative image restoration combining total variation minimization and a second-order functional. Int. J. Comput. Vis. 66(1), 5–18 (2006)
 6.
D Chen, L Cheng, Alternative minimisation algorithm for nonlocal total variational image deblurring. IET Image Process. 4(5), 353–364 (2010)
 7.
F Rousseau, A nonlocal approach for image superresolution using intermodality priors. Med. Image Anal. 14(4), 594–605 (2010)
 8.
S Yun, H Woo, Linearized proximal alternating minimization algorithm for motion deblurring by nonlocal regularization. Pattern Recogn. 44(6), 1312–1326 (2011)
 9.
DH Jiang, X Tan, YQ Liang, et al., A new nonlocal variational biregularized image restoration model via split Bregman method. EURASIP J. Image Vide 1, 1–10 (2015)
 10.
J Liu, TZ Huang, XG Lv, et al., Highorder total variationbased Poissonian image deconvolution with spatially adapted regularization parameter. Appl. Math. Model. 45, 516–529 (2017)
 11.
M Zhu, T Chan, An efficient primaldual hybrid gradient algorithm for total variation image restoration (Ucla Cam Report, 2008)
 12.
X Zhang, M Burger, S Osher, A unified primaldual algorithm framework based on Bregman iteration. J. Sci. Comput. 46(1), 20–46 (2011)
 13.
S Bonettini, V Ruggiero, On the convergence of primal–dual hybrid gradient algorithms for total variation image restoration. J. Math. Imaging Vis 44(3), 236–253 (2012)
 14.
ZJ Bai, D Cassani, M Donatelli, et al., A fast alternating minimization algorithm for total variation deblurring without boundary artifacts. J. Math. Anal. Appl. 415(1), 373–393 (2014)
 15.
J Liu, TZ Huang, IW Selesnick, et al., Image restoration using total variation with overlapping group sparsity. Inform. Sciences 295(C), 232–246 (2015)
 16.
G Gilboa, S Osher, Nonlocal operators with applications to image processing. Multiscale Model. Sim 7(3), 1005–1028 (2008)
 17.
W Dong, L Zhang, G Shi, et al., Image deblurring and superresolution by adaptive sparse domain selection and adaptive regularization. IEEE T. Image Process 20(7), 1838–1857 (2011)
 18.
WZ Shao, HS Deng, Q Ge, et al., Regularized motion blurkernel estimation with adaptive sparse image prior learning. Pattern Recogni 51, 402–424 (2016)
 19.
MS Almeida, M Figueiredo, Deconvolving images with unknown boundaries using the alternating direction method of multipliers. IEEE T. Image Process 22(8), 3074–3086 (2013)
 20.
I Daubechies, M Defrise, CD Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pur. Appl. Math 57(11), 1413–1457 (2004)
 21.
SJ Wright, RD Nowak, M Figueiredo, Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57(7), 2479–2493 (2009)
 22.
W Deng, W Yin, On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66(3), 889–916 (2016)
 23.
B He, H Yang, Some convergence properties of a method of multipliers for linearly constrained monotone variational inequalities. Oper. Res. Lett. 23(3), 151–161 (1998)
 24.
J Eckstein, DP Bertsekas, On the DouglasRachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1–3), 293–318 (1992)
 25.
L Xu, C Lu, Y Xu, J Jia, Image smoothing via L0 gradient minimization. ACM Trans. Graph. 30(6), 1–12 (2011)
 26.
S Tang, W Gong, W Li, et al., Nonblind image deblurring method by local and nonlocal total variation models. Signal Process. 94(1), 339–349 (2014)
 27.
M Shi, T Han, S Liu, Total variation image restoration using hyperLaplacian prior with overlapping group sparsity. Signal Process. 126, 65–76 (2015)
 28.
X Zhang, M Burger, X Bresson, et al., Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 3(3), 253–276 (2010)
Acknowledgements
We thank the reviewers for helping us improve this paper. This work is supported by the National Natural Science Foundation of China (Grant No. 61501328) and the Doctoral Fund of Tianjin Normal University (Grant No. 52XB1406).
Funding
Funding was provided by the National Natural Science Foundation of China (Grant No. 61501328) and the Doctoral Fund of Tianjin Normal University (Grant No. 52XB1406).
Author information
Affiliations
Contributions
MS proposes the innovation ideas and theoretical analysis, and LF carries out experiments and data analysis.
Corresponding author
Ethics declarations
Consent for publication
Written informed consent was obtained from the patient for the publication of this report and any accompanying images.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Shi, M., Feng, L. A novel local and nonlocal total variation combination method for image restoration in wireless sensor networks. J Wireless Com Network 2017, 167 (2017). https://doi.org/10.1186/s136380170951y
Keywords
 Image restoration
 Nonlocal total variation
 Alternating direction method of multipliers
 Overlapping group sparsity