
A novel local and nonlocal total variation combination method for image restoration in wireless sensor networks

Abstract

In this paper, we propose a novel local and nonlocal total variation (TV) combination method for image restoration in wireless sensor networks (WSN), where it plays an important role in improving the quality of transmitted images. First, the degraded image is preprocessed by an image smoothing scheme that divides it into two regions: one contains edges and flat areas and is regularized by the local TV term; the other is rich in image details and is regularized by the nonlocal TV term. Then, the alternating direction method of multipliers (ADMM) is adopted to optimize the complex objective function, and two key parameters are discussed for better performance. Finally, we compare our method with several recent state-of-the-art methods and illustrate the efficiency and performance of the proposed model through experimental results in terms of peak signal-to-noise ratio (PSNR) and computing time.

1 Introduction

With the rapid development of wireless sensor networks, there are higher requirements for signal transmission and processing [1,2,3,4]. However, a two-dimensional image signal is inevitably degraded in the process of acquisition, transmission, and processing, and image restoration techniques are needed to improve the quality of the obtained image. Image restoration is one of the most fundamental issues in imaging science, with many important applications, and it plays an important role in many mid-level and high-level image processing tasks. In this paper, we focus on spatially invariant systems and formulate the common degradation model as

$$ g=h\ast f+n, $$
(1)

where g is the blurred and noisy image, f is the desired true image, ∗ represents convolution, and n denotes additive Gaussian white noise with zero mean. h is the linear spatially invariant blur kernel, which is usually modeled as a blurring matrix constructed from the discrete point spread function (PSF). If the PSF is known, the problem is non-blind deconvolution; if the PSF is unknown, the problem becomes blind deconvolution. In this paper, we focus only on non-blind image restoration.
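For concreteness, the degradation model (1) can be simulated in a few lines; the following NumPy sketch (the function name and the choice of periodic boundaries are our own assumptions) blurs an image with a given PSF and adds zero-mean Gaussian noise.

```python
import numpy as np
from scipy.ndimage import convolve

def degrade(f, h, sigma=3.0, rng=None):
    """Simulate the degradation model (1): g = h * f + n, using convolution
    with periodic ('wrap') boundaries plus zero-mean Gaussian white noise."""
    rng = np.random.default_rng() if rng is None else rng
    g = convolve(f.astype(np.float64), h, mode='wrap')   # h * f
    return g + rng.normal(0.0, sigma, size=f.shape)      # + n
```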

1.1 Problem setup

For the image restoration problem, we seek to estimate the original image f by the following variational formulation:

$$ \arg \underset{f}{\min}\left\{\frac{1}{2}{\left\Vert g-h\ast f\right\Vert}_2^2+\lambda \phi (f)\right\}. $$
(2)

where \( {\left\Vert \cdot \right\Vert}_2 \) denotes the Euclidean norm, ϕ(f) is usually called the regularization term, and λ > 0 is a regularization parameter that controls the balance between the fidelity term and the regularization term. Even if the blur kernel is known, the problem is still highly ill-posed, and it is difficult to recover the original sharp image. This is because the blur kernel is a kind of low-pass filter, which tends to remove high-frequency information such as textures and edges. Hence, the problem needs to be regularized by a proper constraint model. The classical regularization model is the TV model, referred to as the local TV [5], with the form

$$ \arg \underset{f}{\min}\left\{\frac{1}{2}{\left\Vert g-h\ast f\right\Vert}_2^2+\lambda {\left\Vert f\right\Vert}_{TV}\right\}, $$
(3)

where \( {\left\Vert \cdot \right\Vert}_2 \) denotes the \( {L}_2 \) norm, \( {\left\Vert \cdot \right\Vert}_2^2 \) denotes its square, and \( {\left\Vert f\right\Vert}_{TV} \) stands for the total variation of the image, often defined as

$$ {\left\Vert f\right\Vert}_{TV}={\left\Vert \nabla f\right\Vert}_1. $$
(4)

Here, ∇ is the local gradient operator, and \( {\left\Vert \nabla f\right\Vert}_1=\sum \sqrt{{\left({\nabla}_{(1)}f\right)}^2+{\left({\nabla}_{(2)}f\right)}^2} \), where \( {\nabla}_{(1)}f \) and \( {\nabla}_{(2)}f \) represent the local first-order differences of f in the horizontal and vertical directions, respectively. The local TV model has been proven to perform well in preserving edges due to its linear penalty on differences between adjacent pixels. However, it yields staircase artifacts that smooth out image details. Therefore, it is of great importance to model appropriate prior knowledge of natural images, or to impose more appropriate prior assumptions, to constrain the solution. The underlying motivation of this paper is to establish appropriate regularization terms and to improve the efficiency of the numerical algorithm for the resulting complex objective function.

1.2 Related works

In recent years, the nonlocal TV has been successfully used in image processing tasks [6, 7]. Unlike the local TV model, it uses pixel information from the whole image rather than only from adjacent pixels, and it combines the variational framework with a nonlocal self-similarity constraint to restore image details. However, if the nonlocal self-similarity constraint is the only constraint, similar image structures still cannot be estimated accurately. When the TV model and the nonlocal self-similarity constraint are both applied to the entire image, performance is compromised by the limitations of the TV model [8]. Besides, since the nonlocal total variation requires weighted differences between pixels over the whole image, it is more time consuming and needs more efficient algorithms. The split Bregman method has been proposed to solve the nonlocal TV image restoration problem, but its efficiency is unsatisfactory [9, 10]: it needs not only the outer iteration of the subproblem but also inner iterations for the nonlocal Laplacian operator. Zhu et al. propose an efficient primal-dual hybrid gradient algorithm, which alternates between the primal and dual formulations of total variation [11]. A unified primal-dual algorithm framework has been proposed to solve the local total variation problem with L1 basis pursuit and TV-L2 minimization [12]. Bonettini et al. establish the convergence of a general primal-dual method for nonsmooth convex optimization problems whose structure is typical in the imaging framework [13]. In these approaches, many parameters have to be chosen, which is time consuming. To overcome this drawback, the alternating direction method of multipliers (ADMM) has been widely used in recent image processing tasks [14, 15]; its outstanding feature is that its subproblems admit efficient solutions without inner iterations. Hence, the problem needs to be addressed from two aspects: one is how to choose a good regularization functional ϕ(f), which is an active research area in imaging science; the other is how to shorten the computation time without yielding staircase artifacts, which is also a challenging problem.

The rest of the paper is organized as follows. Section 2 introduces the definition of the nonlocal total variation, the principle of overlapping group sparsity, and the ADMM algorithm, which are the essential tools of our method. Section 3 presents the objective function of the proposed model and discusses the parameter selection criteria. In Section 4, we carry out experiments and compare our method with other state-of-the-art methods. Finally, we conclude in Section 5.

2 Preliminaries

2.1 Nonlocal total variation

First, we give the notation that will be used throughout this paper. Images are assumed to be of size m × n, and each image matrix is stacked into a vector of length mn. Denote the Euclidean space \( {R}^{mn} \) as V and define Q = V × V. The ith components of x ∈ V and y ∈ Q are denoted as \( {x}_i\in R \) and \( {y}_i={\left({y}_i^{(1)},{y}_i^{(2)}\right)}^{\mathrm{T}}\in {R}^2 \), respectively. Inner products and Euclidean norms are defined as

$$ {\left\langle x,x\right\rangle}_V=\sum_{i=1}^{mn}{x}_i{x}_i,\kern0.5em {\left\Vert x\right\Vert}_2=\sqrt{{\left\langle x,x\right\rangle}_V} $$
$$ {\left\langle y,y\right\rangle}_Q=\sum_{i=1}^{mn}\sum_{j=1}^2{y}_i^{(j)}{y}_i^{(j)},\kern0.5em {\left\Vert y\right\Vert}_2=\sqrt{{\left\langle y,y\right\rangle}_Q} $$
(5)

\( {D}_{(1)},{D}_{(2)}\in {R}^{mn\times mn} \) are the gradient matrices in the vertical and horizontal directions, and \( {(Df)}_i={\left[{\left({D}_{(1)}f\right)}_i,{\left({D}_{(2)}f\right)}_i\right]}^{\mathrm{T}} \) for each f ∈ V. By stacking the ith rows of \( {D}_{(1)} \) and \( {D}_{(2)} \) together, we get a two-row matrix \( {D}_i\in {R}^{2\times mn} \). Define the global first-order finite difference operator as \( D={\left[{D}_{(1)}^{\mathrm{T}},{D}_{(2)}^{\mathrm{T}}\right]}^{\mathrm{T}}\in {R}^{2 mn\times mn} \), so that Df ∈ Q. All images in this paper are assumed to satisfy the periodic boundary condition, under which the discrete gradient operators are defined by \( {\left({D}_{(1)}f\right)}_{i,j}=\left\{\begin{array}{c}\hfill {f}_{i+1,j}-{f}_{i,j}\kern0.75em \mathrm{if}\kern0.75em i<m\hfill \\ {}\hfill {f}_{1,j}-{f}_{m,j}\kern0.5em \mathrm{if}\kern0.75em i=m\hfill \end{array}\right. \) and \( {\left({D}_{(2)}f\right)}_{i,j}=\left\{\begin{array}{c}\hfill {f}_{i,j+1}-{f}_{i,j}\kern0.75em \mathrm{if}\kern0.75em j<n\hfill \\ {}\hfill {f}_{i,1}-{f}_{i,n}\kern1.5em \mathrm{if}\kern0.75em j=n\hfill \end{array}\right. \).
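As a concrete illustration, these periodic difference operators and the resulting isotropic TV norm of Eq. (4) can be written in a few lines of NumPy (a sketch; the function names are ours):

```python
import numpy as np

def grad_periodic(f):
    """First-order differences D_(1) (vertical) and D_(2) (horizontal)
    under the periodic boundary condition defined above."""
    d1 = np.roll(f, -1, axis=0) - f  # f_{i+1,j} - f_{i,j}, wrapping at i = m
    d2 = np.roll(f, -1, axis=1) - f  # f_{i,j+1} - f_{i,j}, wrapping at j = n
    return d1, d2

def isotropic_tv(f):
    """Isotropic TV norm of Eq. (4): sum of pointwise gradient magnitudes."""
    d1, d2 = grad_periodic(f)
    return np.sum(np.sqrt(d1**2 + d2**2))
```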

Next, we use the definitions and notation of the nonlocal total variation introduced in [16]. Let Ω ⊂ \( {R}^2 \) and x ∈ Ω, let u : Ω → R be a real function, and let ω be a non-negative symmetric weight function, i.e., ω(x, y) = ω(y, x). The nonlocal gradient \( {\nabla}_{\omega }u(x) \) is defined as the vector of all partial differences \( {\nabla}_{\omega }u\left(x,\cdot \right) \) at x:

$$ {\nabla}_{\omega }u\left(x,y\right)=\left(u(y)-u(x)\right)\sqrt{\omega \left(x,y\right)}, $$
(6)

where ω(x, y) is the weight function between x and y defined based on the image u. The graph divergence div ω of a vector p : Ω × Ω → R can be defined as

$$ {div}_{\omega }p(x)={\int}_{\varOmega}\left(p\left(x,y\right)-p\Big(y,x\Big)\right)\sqrt{\omega \left(x,y\right)} dy, $$
(7)

The weight function is defined as the nonlocal means weight function:

$$ \omega \left(x,y\right)=\exp \left\{-\frac{G_a\ast {\left\Vert f\left(x+\cdot \right)-f\left(y+\cdot \right)\right\Vert}^2}{2{h}^2}\right\}, $$
(8)

where \( {G}_a \) is the Gaussian kernel with standard deviation a, h is the filtering parameter related to the standard deviation of the noise, and the “·” in f(x + ·) denotes a square patch centered at point x. When the reference image f is known, the nonlocal means filter is a linear operator. Now, we can define the nonlocal TV norm as the isotropic \( {L}_1 \) norm of the weighted graph gradient \( {\nabla}_{\omega }u(x) \):

$$ {TV}_{\omega }(u)={\int}_{\varOmega}\left|{\nabla}_{\omega }u(x)\right| dx={\int}_{\varOmega}\sqrt{\int_{\varOmega }{\left(u(y)-u(x)\right)}^2\omega \left(x,y\right) dy}\kern0.2em dx. $$
(9)

The main purpose of the nonlocal regularization is to generalize the local gradient and divergence concepts to the nonlocal form. Generally, the reference image is expected to be as close as possible to the original image so that the weights can be obtained more exactly. However, it is hard to get precise weights between the pixels because the original image is degraded during the image formation process. Thus, the weights have to be calculated from a preprocessed image. In this paper, the degraded image is preprocessed by an image smoothing scheme that minimizes the \( {L}_0 \) norm of the image gradient [25].
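To make Eq. (8) concrete, the sketch below computes the nonlocal means weights between one pixel and all pixels in its search window; the Gaussian-weighted patch distance \( {G}_a\ast {\left\Vert f\left(x+\cdot \right)-f\left(y+\cdot \right)\right\Vert}^2 \) is realized as a Gaussian-weighted sum of squared patch differences. The function name, default parameters, and periodic padding are our assumptions, not the authors' exact implementation.

```python
import numpy as np

def nlm_weights(f, i, j, half_search=3, half_patch=2, h=10.0, a=1.0):
    """Nonlocal means weights omega(x, y) of Eq. (8) between pixel x = (i, j)
    and every y in a (2*half_search+1)^2 search window."""
    P = half_search + half_patch
    fp = np.pad(f.astype(np.float64), P, mode='wrap')   # periodic boundaries
    ci, cj = i + P, j + P                               # x in padded coordinates

    # Gaussian kernel G_a over patch offsets, normalized to sum to 1
    off = np.arange(-half_patch, half_patch + 1)
    yy, xx = np.meshgrid(off, off, indexing='ij')
    Ga = np.exp(-(xx**2 + yy**2) / (2.0 * a**2))
    Ga /= Ga.sum()

    ref = fp[ci - half_patch:ci + half_patch + 1,
             cj - half_patch:cj + half_patch + 1]       # patch f(x + .)
    W = np.zeros((2 * half_search + 1, 2 * half_search + 1))
    for di in range(-half_search, half_search + 1):
        for dj in range(-half_search, half_search + 1):
            cand = fp[ci + di - half_patch:ci + di + half_patch + 1,
                      cj + dj - half_patch:cj + dj + half_patch + 1]
            d2 = np.sum(Ga * (ref - cand)**2)           # G_a-weighted distance
            W[di + half_search, dj + half_search] = np.exp(-d2 / (2.0 * h**2))
    return W
```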

2.2 Overlapping sparsity prior

Sparsity-based regularization has obtained promising results for various ill-posed image restoration problems. The group sparsity concept was first used in one-dimensional denoising problems [17, 18]. Since groups of large values may arise anywhere in the signal domain, a group of large values may straddle two of the predefined groups, especially in general signal denoising and restoration problems. Hence, if the group structure is treated as a prior, it is suitable to formulate the problem with overlapping groups, and it is natural to extend the overlapping group sparsity prior to two-dimensional problems such as image restoration. It has been used as a penalty term for TV models and proven effective at alleviating the staircase effect [15].

In [15], for a vector s ∈ \( {R}^n \), a k-point group has been defined as

$$ {s}_{i,k}=\left[s(i),\cdots, s\left(i+k-1\right)\right]\in {R}^k $$
(10)

Here, s i, k denotes a block of k contiguous samples of s starting from the ith index. A group sparsity regularizer is defined as

$$ \zeta (s)=\sum_{i=1}^n{\left\Vert {s}_{i,k}\right\Vert}_2 $$
(11)

For the two-dimensional case, a k × k point group of the image f ∈ \( {R}^{n\times n} \) is defined as

$$ {\tilde{f}}_{i,j,k}=\left[\begin{array}{cccc}\hfill {f}_{i-{m}_1,j-{m}_1}\hfill & \hfill {f}_{i-{m}_1,j-{m}_1+1}\hfill & \hfill \cdots \hfill & \hfill {f}_{i-{m}_1,j+{m}_2}\hfill \\ {}\hfill {f}_{i-{m}_1+1,j-{m}_1}\hfill & \hfill {f}_{i-{m}_1+1,j-{m}_1+1}\hfill & \hfill \cdots \hfill & \hfill {f}_{i-{m}_1+1,j+{m}_2}\hfill \\ {}\hfill \vdots \hfill & \hfill \vdots \hfill & \hfill \ddots \hfill & \hfill \vdots \hfill \\ {}\hfill {f}_{i+{m}_2,j-{m}_1}\hfill & \hfill {f}_{i+{m}_2,j-{m}_1+1}\hfill & \hfill \cdots \hfill & \hfill {f}_{i+{m}_2,j+{m}_2}\hfill \end{array}\right]\in {R}^{k\times k}. $$
(12)

where \( {m}_1=\left\lfloor \left(k-1\right)/2\right\rfloor \) and \( {m}_2=\left\lfloor k/2\right\rfloor \), following [15]. By stacking the k columns of the matrix \( {\tilde{f}}_{i,j,k} \), i.e., \( {f}_{i,j,k}={\tilde{f}}_{i,j,k}\left(:\right) \), a vector is obtained, and the overlapping group sparsity (OGS) functional of a two-dimensional array can be defined as

$$ {\varphi}_{OGS}(f)=\sum_{i,j=1}^n{\left\Vert {f}_{i,j,k}\right\Vert}_2. $$
(13)

The regularization term ϕ(f), based on the image gradients in the vertical and horizontal directions, is denoted as

$$ \phi (f)={\varphi}_{OGS}\left({D}_{(1)}f\right)+{\varphi}_{OGS}\left({D}_{(2)}f\right). $$
(14)

When k = 1, this reduces to the special case commonly known as the anisotropic TV functional, while the usual TV regularization term is defined as

$$ {\varPhi}_{TV}(f)=\sum_{i,j=1}^n\left\Vert {\left(\nabla f\right)}_{i,j}\right\Vert, $$
(15)

where the discrete gradient operator ∇ : \( {R}^{mn}\to {R}^{2\times mn} \) is defined by \( {\left(\nabla f\right)}_{i,j}=\left({\left({D}_{(1)}f\right)}_{i,j},{\left({D}_{(2)}f\right)}_{i,j}\right) \).
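A direct, loop-based evaluation of the OGS functional (13) and of the gradient-domain regularizer (14) might look as follows; this is a sketch under our own naming, with k = 3 as an illustrative group size and periodic padding matching Section 2.1.

```python
import numpy as np

def ogs_functional(f, k=3):
    """OGS functional of Eq. (13): sum over all pixels of the l2 norm of the
    k-by-k group of Eq. (12), with m1 = floor((k-1)/2), m2 = floor(k/2)."""
    m1, m2 = (k - 1) // 2, k // 2
    fp = np.pad(f, ((m1, m2), (m1, m2)), mode='wrap')
    total = 0.0
    n1, n2 = f.shape
    for i in range(n1):
        for j in range(n2):
            group = fp[i:i + k, j:j + k]          # the group matrix of Eq. (12)
            total += np.sqrt(np.sum(group**2))
    return total

def ogs_regularizer(f, k=3):
    """Gradient-domain regularization term of Eq. (14)."""
    d1 = np.roll(f, -1, axis=0) - f               # D_(1) f
    d2 = np.roll(f, -1, axis=1) - f               # D_(2) f
    return ogs_functional(d1, k) + ogs_functional(d2, k)
```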

2.3 ADMM

ADMM is a special splitting case of the augmented Lagrangian method: it splits a complex problem into simpler subproblems, which can be solved by efficient operators such as the DFT and the shrinkage operator. It can also take advantage of the separable structure of the split objective functions, which allows a straightforward treatment of various regularization terms, such as total variation regularization [19]. Each ADMM iteration solves a linear system whose matrix is related to the Hessian of the objective function and thus carries second-order information. This explains its excellent computational efficiency, which has been proven to be faster than the classical iterative shrinkage thresholding (IST) algorithms [20], and even than their improved versions [21]. On the other hand, due to the typically huge size of this inversion, ADMM is limited to problems whose linear systems can be handled efficiently by exploiting some particular structure. In this paper, we use the fast Fourier transform (FFT) to improve the efficiency of ADMM. The convergence is guaranteed by the classical ADMM theory in the literature [22, 23]. In this subsection, we briefly review its basic theory for an intuitive understanding.

ADMM was proposed to solve optimization problems with the following separable constrained form:

$$ \min\ {y}_1\left({x}_1\right)+{y}_2\left({x}_2\right);\kern0.5em \mathrm{s}.\mathrm{t}.{A}_1{x}_1+{A}_2{x}_2=a $$
(16)

where \( {x}_i\in {X}_i \), i = 1, 2, \( {y}_i:{X}_i\to R \) are closed convex functions, \( {X}_i\subset {R}^{m_i} \) are nonempty closed convex sets, \( {A}_i\in {R}^{l\times {m}_i} \) are linear transforms, and a ∈ \( {R}^l \) is a given vector. With q ∈ \( {R}^l \) as the Lagrange multiplier for the linear constraint, the augmented Lagrangian function of problem (16) is

$$ L\left({x}_1,{x}_2,q\right)={y}_1\left({x}_1\right)+{y}_2\left({x}_2\right)+{q}^T\left({A}_1{x}_1+{A}_2{x}_2-a\kern0.5em \right)+\frac{\delta }{2}{\left\Vert {A}_1{x}_1+{A}_2{x}_2-a\ \right\Vert}_2^2 $$
(17)
Here, δ is the penalty parameter that controls the linear constraint. According to the theory of ADMM, finding an optimal solution is equivalent to finding a saddle point \( L\left({x}_1^{\ast },{x}_2^{\ast },{q}^{\ast}\right) \) by an alternating minimization scheme, e.g., keeping x 2 and q fixed while minimizing L with respect to x 1. We thus obtain the following ADMM iterative minimizing algorithm (Algorithm 1):

$$ \left\{\begin{array}{l}{x}_1^{k+1}=\underset{x_1}{\arg \min }L\left({x}_1,{x}_2^k,{q}^k\right)\\ {}{x}_2^{k+1}=\underset{x_2}{\arg \min }L\left({x}_1^{k+1},{x}_2,{q}^k\right)\\ {}{q}^{k+1}={q}^k+\delta \left({A}_1{x}_1^{k+1}+{A}_2{x}_2^{k+1}-a\right)\end{array}\right. $$

The iterative strategy for the two subproblems is in the Gauss-Seidel fashion, and thus the variables x 1 and x 2 can be solved for separately in alternating order. In [24], Eckstein and Bertsekas demonstrated that ADMM can be interpreted as an application of the proximal point algorithm. Meanwhile, a convergence result was proved for ADMM that allows approximate computation of \( {x}_1^{k+1} \) and \( {x}_2^{k+1} \). Here, we restate their result as it applies to (16), under slightly weaker assumptions and without over- or under-relaxation factors.

Theorem 2.1 (Eckstein, Bertsekas [24]) Consider problem (16), where y 1 and y 2 are closed proper convex functions, A 1 has full column rank, and \( {y}_2\left({x}_2\right)+{\left\Vert {A}_2{x}_2\right\Vert}^2 \) is strictly convex. Let the initial values \( {x}_2^0 \) and \( {q}^0 \) be arbitrary and δ > 0. Suppose we are given sequences {μ k } and {ν k } such that μ k  ≥ 0, ν k  ≥ 0, \( {\sum}_{k=0}^{\infty }{\mu}_k<\infty \), and \( {\sum}_{k=0}^{\infty }{\nu}_k<\infty \). Suppose that

$$ \left\{\begin{array}{l}\left\Vert {x}_1^{k+1}-\underset{x_1}{\arg \min }{y}_1\left({x}_1\right)+\left\langle {q}^k,-{A}_1{x}_1\right\rangle +\frac{\delta }{2}{\left\Vert {A}_1{x}_1+{A}_2{x}_2^k-a\right\Vert}^2\right\Vert \le {\mu}_k\\ {}\left\Vert {x}_2^{k+1}-\underset{x_2}{\arg \min }{y}_2\left({x}_2\right)+\left\langle {q}^k,-{A}_2{x}_2\right\rangle +\frac{\delta }{2}{\left\Vert {A}_1{x}_1^{k+1}+{A}_2{x}_2-a\right\Vert}^2\right\Vert \le {\nu}_k\\ {}{q}^{k+1}={q}^k+\delta \left({A}_1{x}_1^{k+1}+{A}_2{x}_2^{k+1}-a\right)\end{array}\right. $$

If there exists a saddle point of L(x 1, x 2, q) in Eq. (17), then \( {x}_1^k\to {x}_1^{\ast } \), \( {x}_2^k\to {x}_2^{\ast } \), and q k → q ∗, where \( \left({x}_1^{\ast },{x}_2^{\ast },{q}^{\ast}\right) \) is such a saddle point. On the other hand, if no such saddle point exists, then at least one of the sequences {μ k } or {ν k } must be unbounded.
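To make the scheme tangible, the following toy instance of Eq. (16) (our own illustrative example, with \( {A}_1=I \), \( {A}_2=-I \), a = 0) runs ADMM on \( \min \frac{1}{2}{\left\Vert {x}_1-b\right\Vert}^2+\lambda {\left\Vert {x}_2\right\Vert}_1 \) subject to \( {x}_1-{x}_2=0 \); both subproblems are closed-form, which is precisely why ADMM needs no inner iterations here.

```python
import numpy as np

def soft_threshold(v, t):
    """Shrinkage operator: proximal mapping of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_toy(b, lam=0.5, delta=1.0, iters=200):
    """ADMM for min 1/2||x1 - b||^2 + lam*||x2||_1 s.t. x1 - x2 = 0."""
    x1 = b.copy()
    x2 = np.zeros_like(b)
    q = np.zeros_like(b)                       # Lagrange multiplier
    for _ in range(iters):
        # x1-step: quadratic subproblem, closed form
        x1 = (b - q + delta * x2) / (1.0 + delta)
        # x2-step: shrinkage
        x2 = soft_threshold(x1 + q / delta, lam / delta)
        # multiplier update, as in the last line of Theorem 2.1
        q = q + delta * (x1 - x2)
    return x1
```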

3 Proposed model and numerical algorithm

3.1 Image region division

The image is preprocessed by the same image smoothing scheme as in [25]; we take experiments on “Barbara” as an example. As shown in Fig. 1, Fig. 1a is the original image, Fig. 1b shows the salient edges and constant regions of (a), and Fig. 1c shows the details of (a), obtained by subtracting (b) from (a). Figure 1d is the blurred image, with the 19 × 19 blur kernel shown in the lower left. Figure 1e, f show the edges and constant regions, and the details, of (d), respectively. Comparing the extracted edges and constant regions in Fig. 1b, e, we cannot see much obvious difference, although the blurred image (d) is seriously damaged. Meanwhile, from a visual point of view, the details of the blurred image shown in (f) are even clearer than those of the original image (a).

Fig. 1 Image region division. a Original image. b Edges and constant regions of (a). c Details of (a). d Blurred image (PSNR = 19.24). e Edges and constant regions of (d). f Details of (d)

3.2 Novel nonlocal total variation

To overcome the drawback of the local TV model, researchers have proposed combining the nonlocal TV and the local TV model for some image processing tasks. Tang et al. [26] combined a local TV filter and the nonlocal means algorithm for image denoising. In their work, a local TV filter was used only for the rare patches, such as special edges and rare detail patches, and the nonlocal means algorithm was used for the rest. However, that model is of limited use for image deblurring, because the image structures are severely damaged and the similar structures cannot be found accurately. Inspired by this, we apply the nonlocal TV regularization only to the image details, to protect the detail information, and use the overlapping group sparsity prior as a sparser image representation constraint. In particular, our previous work [27] has proven the performance of the overlapping group sparsity prior in suppressing the staircase effect.

Now, a novel nonlocal TV based image restoration model is defined as follows:

$$ \underset{f_S,{f}_D}{\min}\left\{\frac{\lambda }{2}{\left\Vert h\ast {f}_S+h\ast {f}_D-g\right\Vert}_2^2+\alpha \left\Vert \phi \left({D}_{(1)}{f}_S\right)+\phi \left({D}_{(2)}{f}_S\right)\right\Vert +\left\Vert {\nabla}_{\omega }{f}_D\right\Vert \right\}, $$
(18)

where λ and α are regularization parameters. The original image f is divided into f S and f D by the gradient extraction scheme mentioned above: f S denotes the salient edges and constant regions, and f D denotes the image details. In order to apply the ADMM idea, we introduce the auxiliary variables v 1 = D (1) f, v 2 = D (2) f, and z = f. Compared with the standard ADMM of Eq. (16), problem (18) fits the proposed framework with the following specification:

  1. (1)
    $$ {x}_1\coloneq f,{x}_2\coloneq \left[{v}_1,{v}_2,z\right]; $$
  2. (2)

    \( {y}_1\left({x}_1\right)=\frac{\lambda }{2}{\left\Vert h\ast {f}_S+h\ast {f}_D-g\right\Vert}_2^2 \), \( {y}_2\left({x}_2\right)=\alpha \left\Vert \phi \left({D}_{(1)}{f}_S\right)+\phi \left({D}_{(2)}{f}_S\right)\right\Vert +\left\Vert {\nabla}_{\omega }{f}_D\right\Vert \).

Then, the Augmented Lagrangian function of the minimization problem (18) is defined as follows:

$$ {\displaystyle \begin{array}{l}{L}_A\left(f,v,z;\varepsilon, \eta \right)=\frac{\lambda }{2}{\left\Vert h\ast {f}_S+h\ast {f}_D-g\right\Vert}_2^2-\varepsilon \left(z-{f}_S\right)+\frac{\beta_1}{2}{\left\Vert z-{f}_S\right\Vert}_2^2\\ {}\kern1em -\eta \left(v-{Df}_S\right)+\frac{\beta_2}{2}{\left\Vert v-{Df}_S\right\Vert}_2^2+\alpha \left\Vert \phi \left({D}_{(1)}{f}_S\right)+\phi \left({D}_{(2)}{f}_S\right)\right\Vert +\left\Vert {\nabla}_{\omega }{f}_D\right\Vert, \end{array}} $$
(19)

where β 1, β 2 > 0 are the penalty parameters of the linear constraints corresponding to the auxiliary variables z and v, and ε and η are the Lagrange multipliers. Starting at f = f k, ε = ε k, and η = η k, applying ADMM yields the iterative scheme

$$ \left(\begin{array}{c}\hfill {v}^{k+1}\hfill \\ {}\hfill {z}^{k+1}\hfill \end{array}\right)\leftarrow \underset{z\in \varOmega, v}{\arg \min }{L}_A\left({f}^k,v,z;{\varepsilon}^k,{\eta}^k\right), $$
(20)
$$ {f}^{k+1}\leftarrow \underset{f}{\arg \min }{L}_A\left(f,{v}^{k+1},{z}^{k+1};{\varepsilon}^k,{\eta}^k\right), $$
(21)
$$ \left(\begin{array}{c}\hfill {\varepsilon}^{k+1}\hfill \\ {}\hfill {\eta}^{k+1}\hfill \end{array}\right)\leftarrow \left(\begin{array}{c}\hfill {\varepsilon}^k-{\gamma \beta}_2\left({v}^{k+1}-{Df}^{k+1}\right)\hfill \\ {}\hfill {\eta}^k-{\gamma \beta}_1\left({z}^{k+1}-{f}^{k+1}\right)\hfill \end{array}\right). $$
(22)

As shown by the theoretical analysis in [22], any positive values of the parameters β 1, β 2, and γ ensure the convergence of ADMM; they will be set to specified values in the later experiments.

3.3 Numerical algorithm

Following the ADMM scheme of Algorithm 1, we employ alternating optimization to estimate the divided image regions f S and f D .

  • With f D fixed, we search for \( {f}_S^{k+1} \) as a solution of

$$ {\displaystyle \begin{array}{l}\left({f}_S^{k+1},{v}^{k+1}\right)=\arg \min \left\{\frac{\lambda }{2}{\left\Vert h\ast {f}_S+h\ast {f}_D-g\right\Vert}_2^2\right.+\frac{\gamma }{2}\left({\left\Vert {D}_{(1)}{f}_S-{v}_{(1)}^k+\frac{\eta_1^k}{\gamma}\right\Vert}_2^2\right.\\ {}\kern4.5em +\kern0.5em \left.\left.{\left\Vert {D}_{(2)}{f}_S-{v}_{(2)}^k+\frac{\eta_2^k}{\gamma}\right\Vert}_2^2+{\left\Vert {f}_S-{z}^k+\frac{\varepsilon^k}{\gamma}\right\Vert}_2^2\right)+\alpha \left\Vert \phi \left({D}_{(1)}{f}_S\right)+\phi \left({D}_{(2)}{f}_S\right)\right\Vert \right\}\end{array}} $$
(23)

The minimization of Eq. (23) with respect to f S is a least squares problem and can be solved by the fast Fourier transform (FFT), which requires only O(n log(n)) arithmetic operations, where n represents the size of the given image. The corresponding normal equation of Eq. (23), with H denoting the blurring matrix formed by h, is

$$ {\displaystyle \begin{array}{l}\left[{H}^TH+\frac{\gamma }{\lambda}\left({D}_{(1)}^T{D}_{(1)}+{D}_{(2)}^T{D}_{(2)}+I\right)\right]{f}_S^{k+1}={H}^Tg\\ {}\kern2.75em +\frac{\gamma }{\lambda}\left[{D}_{(1)}^T\left({v}_1^k-\frac{\eta_1^k}{\gamma}\right)+{D}_{(2)}^T\left({v}_2^k-\frac{\eta_2^k}{\gamma}\right)+{z}^k-\frac{\varepsilon^k}{\gamma}\right]\end{array}} $$
(24)
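Under the periodic boundary condition, H, \( {D}_{(1)} \), and \( {D}_{(2)} \) are all diagonalized by the 2-D DFT, so Eq. (24) can be solved pointwise in the Fourier domain. The sketch below is our own transcription (assuming the PSF has been zero-padded to the image size and centered); variable names follow Eq. (24).

```python
import numpy as np

def solve_fS(g, h, v1, v2, z, eta1, eta2, eps, lam, gamma):
    """FFT solve of the normal equation (24) under periodic boundaries.
    h is the PSF zero-padded to the image size and centered."""
    m, n = g.shape
    Fh = np.fft.fft2(np.fft.ifftshift(h))                     # eigenvalues of H
    k1 = np.zeros((m, n)); k1[0, 0] = -1.0; k1[-1, 0] = 1.0   # kernel of D_(1)
    k2 = np.zeros((m, n)); k2[0, 0] = -1.0; k2[0, -1] = 1.0   # kernel of D_(2)
    F1, F2 = np.fft.fft2(k1), np.fft.fft2(k2)

    r = gamma / lam
    rhs = (np.conj(Fh) * np.fft.fft2(g)
           + r * (np.conj(F1) * np.fft.fft2(v1 - eta1 / gamma)
                  + np.conj(F2) * np.fft.fft2(v2 - eta2 / gamma)
                  + np.fft.fft2(z - eps / gamma)))
    denom = np.abs(Fh)**2 + r * (np.abs(F1)**2 + np.abs(F2)**2 + 1.0)
    return np.real(np.fft.ifft2(rhs / denom))
```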

Considering v 1 and v 2, which share the same expression, the v-subproblem corresponds to the following optimization problem:

$$ {v}_i^{k+1}=\underset{v_i}{\arg \min}\left\{\frac{\gamma }{2}{\left\Vert {v}_i-\left({D}_{(i)}{f}_S^{k+1}+\frac{\eta_i^k}{\gamma}\right)\right\Vert}_2^2+\alpha \phi \left({v}_i\right)\right\},i=1,2. $$
(25)

which can be attacked with the extremely fast shrinkage formula, yielding

$$ {v}_i^{k+1}=\frac{\left({D}_{(i)}{f}_S^{k+1}+{\eta}_i^k\right)}{\left|{D}_{(i)}{f}_S^{k+1}+{\eta}_i^k\right|}\cdot \max \left(\left|{D}_{(i)}{f}_S^{k+1}+{\eta}_i^k\right|-\frac{\lambda }{\gamma },0\right),i=1,2. $$
(26)
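In code, Eq. (26) is a componentwise soft shrinkage; the sketch below is a direct transcription of the printed formula (for real arrays, the ratio w/|w| is simply sign(w), with 0 mapped to 0).

```python
import numpy as np

def shrink_v(Dfs, eta, lam, gamma):
    """Shrinkage update of Eq. (26): Dfs = D_(i) f_S^{k+1}, eta = eta_i^k."""
    w = Dfs + eta
    return np.sign(w) * np.maximum(np.abs(w) - lam / gamma, 0.0)
```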
  • With \( {f}_S^{k+1} \) fixed, we obtain \( {f}_D^{k+1} \):

$$ {f}_D^{k+1}=\underset{f_D}{\arg \min}\left\{\frac{\lambda }{2}{\left\Vert h\ast {f}_S^{k+1}+h\ast {f}_D-g\right\Vert}_2^2+\left\Vert {\nabla}_{\omega }{f}_D\right\Vert \right\}. $$
(27)

This is a classical nonlocal TV image deblurring problem, for which we adopt the fast nonlocal TV deconvolution algorithm of [28]. The image smoothing scheme is performed at each iteration, and \( {f}_S^{k+1} \) is used as the reference image in Eq. (27), which helps to improve the accuracy of the algorithm. Our proposed method is summarized as follows (Algorithm 2):

[Algorithm 2: summary of the proposed local and nonlocal TV combination restoration method]

4 Results and discussion

4.1 Experimental settings

In order to demonstrate the viability and efficiency of the proposed method, experiments are carried out on various images and kernels, and the results are compared with other state-of-the-art methods. The test images are classical ones: “Boat” (128 × 128), “Cameraman” (256 × 256), “Lena” (512 × 512), “Barbara” (512 × 512), and “Man” (1024 × 1024), shown in Fig. 2a. Five different motion blur kernels are used in the experiments; they are freely available from http://www.wisdom.weizmann.ac.il/~levina/ and frequently used in current image restoration research. Their sizes are 13 × 13, 17 × 17, 19 × 19, 21 × 21, and 23 × 23, as shown in Fig. 2b. We also try an average blur kernel and add Gaussian noise to degrade the image. The stopping criterion is \( {\left\Vert {f}^{m+1}-{f}^m\right\Vert}_2/{\left\Vert {f}^m\right\Vert}_2<0.001 \). The PSNR is used to evaluate the quality of the recovery results. All experiments are conducted in Matlab on a desktop PC with a 2.3 GHz Intel Core processor and 4.0 GB of memory.
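The stopping rule above is a relative-change test; in code (a trivial sketch, names ours):

```python
import numpy as np

def converged(f_new, f_old, tol=1e-3):
    """Relative-change criterion ||f^{m+1} - f^m||_2 / ||f^m||_2 < tol."""
    return np.linalg.norm(f_new - f_old) / np.linalg.norm(f_old) < tol
```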

Fig. 2 Dataset for the numerical experiments. a Test images (from 128 × 128 to 1024 × 1024). b Blur kernels (from 13 × 13 to 23 × 23)

4.2 Parameter settings

The PSNR is defined as

$$ PSNR=10\lg \frac{255^2 MN}{\sum_{i=1}^M\sum_{j=1}^N{\left[\widehat{f}\left(i,j\right)-f\Big(i,j\Big)\right]}^2}, $$
(28)

where \( \widehat{f}\left(i,j\right) \) denotes the recovered image and f(i, j) the true image. Since fast Fourier transforms are used in the algorithm, boundary problems need to be handled; otherwise, they will seriously affect the PSNR. The “edgetaper” function in Matlab is used to enforce the periodic boundary condition. There are many auxiliary variables and coefficients in the objective formulation, so its form appears complicated, as shown in Eqs. (19) and (23), especially in the algorithm description and theoretical analysis. In actual fact, the auxiliary variables are helpful in programming and in reducing computational redundancy.
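For reference, Eq. (28) can be transcribed directly (a sketch for 8-bit images, where 255 is the peak value):

```python
import numpy as np

def psnr(f_hat, f):
    """PSNR of Eq. (28): 10*log10(255^2 * M * N / sum of squared errors)."""
    mse = np.mean((f_hat.astype(np.float64) - f.astype(np.float64))**2)
    return 10.0 * np.log10(255.0**2 / mse)
```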

Considering the penalty parameters β 1, β 2, and γ: theoretically, any positive values ensure the convergence of ADMM. In practice, there are usually two ways to determine them. One tries some values and picks a certain value with satisfactory performance; the other applies self-adaptive adjustment rules starting from an arbitrary initial guess, but requires expensive computation. In our experience, a well-tuned constant value performs as well as the value obtained by the self-adaptive strategy. For our proposed model, we tune and set β 1 = 0.025 and β 2 = 0.05. The parameters λ = 0.04, α = 0.05, and γ = λ/3 ≈ 0.013 are chosen based on the experience of our previous work; for more details, refer to [27]. In this subsection, we mainly focus on the iteration number k and the search window of the nonlocal TV operator. As shown in Fig. 3, the two curves give the numerical analysis of the recovery of the blurred “Barbara” image (512 × 512) of Fig. 1d by our proposed method. Figure 3a shows that, as the iterations increase, the objective function value becomes smaller and the algorithm tends to be stable. Figure 3b shows that the PSNR is not significantly improved once the number of iterations reaches 50. Hence, in the following experiments, we set the number of iterations to k = 50.

Fig. 3 Numerical analysis for the iteration number k. a Curve of objective function cost versus iterations. b Curve of PSNR versus iterations

Then, the blurred image in Fig. 1d is used again to discuss the weight update of the nonlocal operator. An appropriate size of the search window can guarantee a better PSNR and less CPU time. The window size is initially set to 5 × 5 and increased in steps of two; we also add the data for the window size 3 × 3. Computation time and PSNR are recorded at the same time, and the results are shown in Fig. 4. Figure 4a shows that the computing time grows as the window size becomes larger, while Fig. 4b shows that the PSNR is not greatly improved. This conclusion is similar to that of ref. [8], which also discusses the size of the search window. In particular, although there is no obvious difference in PSNR between search windows of 9 × 9 and 11 × 11, the CPU time becomes longer. In order to shorten the CPU time while obtaining a reasonable PSNR, we use a 7 × 7 search window in the following numerical experiments.

Fig. 4 Analysis of the search window Ω ω (x) of the nonlocal TV operator. a Curve of CPU time versus window size. b Curve of PSNR versus window size

4.3 Comparison with other state-of-the-art methods

In this subsection, we use the above images and the blur kernels of Fig. 2 to verify the efficiency of the proposed model and compare it with four state-of-the-art methods: the local TV method [5], the overlapping group sparsity TV method [15], the combination of local and nonlocal TV using a Bregmanized operator splitting iterative scheme [26], and the nonlocal TV-based method using a linearized proximal alternating minimization algorithm [8]. Numerical results in terms of PSNR and CPU time are reported in Table 1. It can be noted that our method obtains higher PSNR and shorter CPU times than the other methods.

Table 1 PSNR (dB) and CPU time for the deblurring experiments on the dataset of Fig. 2 (PSNR/time)

Because our method performs the image division step in each iteration, it does not significantly shorten the computing time; as a compromise, it achieves a better recovery effect that suppresses staircase effects while protecting image details and edges. Experiments on the blurred “Barbara” image of Fig. 1d are shown in Fig. 5, where the face region of the Barbara image is cropped so that the details are clearer. Figure 5a is the image blurred by a 19 × 19 kernel. Figure 5b is the image recovered by the local TV method, which causes obvious staircase effects. In Fig. 5c, the staircase effect is alleviated but still unsatisfactory. In Fig. 5d, more details are restored, but the method consumes too much time. In Fig. 5e, the method effectively saves computing time and achieves a relatively satisfactory recovery. Our result is shown in Fig. 5f; comparing the pattern on the scarf, we can conclude that our method suppresses staircase effects while preserving more details.

Fig. 5 Experiments on the blurred image “Barbara” (degraded by a 19 × 19 kernel). a Blurred image (PSNR = 18.17). b The local TV [5] (PSNR = 20.12, t = 6.36 s). c OGS_TV [15] (PSNR = 21.38, t = 8.85 s). d The local/nonlocal TV [26] (PSNR = 22.89, t = 26.77 s). e The nonlocal TV [8] (PSNR = 23.34, t = 14.32 s). f The proposed model (PSNR = 24.93, t = 13.03 s)

We also test on blurred and noisy images, as shown in Figs. 6 and 7. The image “Cameraman” (256 × 256) is degraded by a 9 × 9 average kernel and Gaussian white noise with σ = 3, as shown in Fig. 6a. In Fig. 6b, there are serious staircase effects from the local TV method, which shows the drawback of the pure total variation framework. In Fig. 6c, the noise is removed but the staircase effect is still very obvious. In Fig. 6d, the staircase effect is mitigated, but the method consumes too much time, which is attributed to the numerical processing of the algorithm. Comparing Fig. 6e, f, we note that our method obtains a better result while saving more computing time.

Fig. 6 Experiments on the blurry and noisy image “Cameraman” (256 × 256). a Blurry and noisy (PSNR = 21.22). b The local TV [5] (PSNR = 22.45, t = 3.7 s). c OGS_TV [15] (PSNR = 23.09, t = 5.7 s). d The local/nonlocal TV [26] (PSNR = 24.57, t = 10.5 s). e The nonlocal TV [8] (PSNR = 25.69, t = 6.3 s). f The proposed model (PSNR = 27.69, t = 5.2 s)

Fig. 7 Experiments on the blurry and noisy image “Lena” (512 × 512). a Blurry and noisy (PSNR = 19.31). b The local TV [5] (PSNR = 21.22, t = 4.32 s). c OGS_TV [15] (PSNR = 23.09, t = 10.93 s). d The local/nonlocal TV [26] (PSNR = 24.57, t = 20.21 s). e The nonlocal TV [8] (PSNR = 25.69, t = 14.34 s). f The proposed model (PSNR = 27.53, t = 13.02 s)

Figure 7 shows the recovery results of the compared methods on the classical test image “Lena”; the feathers on her hat are of particular interest for observation and comparison. As shown in Fig. 7a, the blurry and noisy “Lena” image (512 × 512) is degraded by a 13 × 13 average kernel and Gaussian white noise with σ = 3. There are obvious staircase effects in Fig. 7b, c. The method in Fig. 7d also divides the image into regions by a gradient extraction scheme, which preserves more details but costs much more time. The method in Fig. 7e is based on the nonlocal TV and uses a linearized proximal alternating minimization algorithm to improve efficiency. Our result in Fig. 7f shows a higher PSNR and a shorter CPU time.

5 Conclusions

In this paper, a novel local and nonlocal total variation combination method has been proposed for image restoration in WSN. To exploit the image information properly, the image is divided into two regions by an image smoothing scheme: the local TV term is applied to the salient edges and constant regions, and the nonlocal term to the details. The overlapping group sparsity is adopted as a prior constraint in the proposed model to alleviate the staircase effect as much as possible. To improve efficiency, we optimize the energy function by the ADMM algorithm, which involves complex formulas but is easy to program. A parameter selection criterion for two key parameters, established through numerical experiments, is the other main contribution. Comparison with other state-of-the-art methods shows that our method achieves higher efficiency and strikes a good balance between alleviating staircase effects and preserving image details.

References

  1. Q Liang, X Cheng, SC Huang, D Chen, Opportunistic sensing in wireless sensor networks: theory and application. IEEE Trans. Comput. 63(8), 2002–2010 (2014)

  2. F Zhao, L Wei, H Chen, Optimal time allocation for wireless information and power transfer in wireless powered communication systems. IEEE Trans. Veh. Technol. 65(3), 1830–1835 (2016)

  3. F Zhao, H Nie, H Chen, Group buying spectrum auction algorithm for fractional frequency reuse cognitive cellular systems. Ad Hoc Netw. 58, 239–246 (2017)

  4. F Zhao, W Wang, H Chen, Q Zhang, Interference alignment and game-theoretic power allocation in MIMO heterogeneous sensor networks communications. Signal Process. 126, 173–179 (2016)

  5. M Lysaker, XC Tai, Iterative image restoration combining total variation minimization and a second-order functional. Int. J. Comput. Vis. 66(1), 5–18 (2006)

  6. D Chen, L Cheng, Alternative minimisation algorithm for non-local total variational image deblurring. IET Image Process. 4(5), 353–364 (2010)

  7. F Rousseau, A non-local approach for image super-resolution using intermodality priors. Med. Image Anal. 14(4), 594–605 (2010)

  8. S Yun, H Woo, Linearized proximal alternating minimization algorithm for motion deblurring by nonlocal regularization. Pattern Recogn. 44(6), 1312–1326 (2011)

  9. DH Jiang, X Tan, YQ Liang, et al., A new nonlocal variational bi-regularized image restoration model via split Bregman method. EURASIP J. Image Video Process. 2015, 1–10 (2015)

  10. J Liu, TZ Huang, XG Lv, et al., High-order total variation-based Poissonian image deconvolution with spatially adapted regularization parameter. Appl. Math. Model. 45, 516–529 (2017)

  11. M Zhu, T Chan, An efficient primal-dual hybrid gradient algorithm for total variation image restoration (UCLA CAM Report, 2008)

  12. X Zhang, M Burger, S Osher, A unified primal-dual algorithm framework based on Bregman iteration. J. Sci. Comput. 46(1), 20–46 (2011)

  13. S Bonettini, V Ruggiero, On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration. J. Math. Imaging Vis. 44(3), 236–253 (2012)

  14. ZJ Bai, D Cassani, M Donatelli, et al., A fast alternating minimization algorithm for total variation deblurring without boundary artifacts. J. Math. Anal. Appl. 415(1), 373–393 (2014)

  15. J Liu, TZ Huang, IW Selesnick, et al., Image restoration using total variation with overlapping group sparsity. Inf. Sci. 295, 232–246 (2015)

  16. G Gilboa, S Osher, Nonlocal operators with applications to image processing. Multiscale Model. Simul. 7(3), 1005–1028 (2008)

  17. W Dong, L Zhang, G Shi, et al., Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 20(7), 1838–1857 (2011)

  18. WZ Shao, HS Deng, Q Ge, et al., Regularized motion blur-kernel estimation with adaptive sparse image prior learning. Pattern Recogn. 51, 402–424 (2016)

  19. MS Almeida, M Figueiredo, Deconvolving images with unknown boundaries using the alternating direction method of multipliers. IEEE Trans. Image Process. 22(8), 3074–3086 (2013)

  20. I Daubechies, M Defrise, C De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57(11), 1413–1457 (2004)

  21. SJ Wright, RD Nowak, M Figueiredo, Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57(7), 2479–2493 (2009)

  22. W Deng, W Yin, On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66(3), 889–916 (2016)

  23. B He, H Yang, Some convergence properties of a method of multipliers for linearly constrained monotone variational inequalities. Oper. Res. Lett. 23(3), 151–161 (1998)

  24. J Eckstein, DP Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1–3), 293–318 (1992)

  25. L Xu, C Lu, Y Xu, J Jia, Image smoothing via L0 gradient minimization. ACM Trans. Graph. 30(6), 1–12 (2011)

  26. S Tang, W Gong, W Li, et al., Non-blind image deblurring method by local and nonlocal total variation models. Signal Process. 94(1), 339–349 (2014)

  27. M Shi, T Han, S Liu, Total variation image restoration using hyper-Laplacian prior with overlapping group sparsity. Signal Process. 126, 65–76 (2016)

  28. X Zhang, M Burger, X Bresson, et al., Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 3(3), 253–276 (2010)


Acknowledgements

We thank the reviewer for helping us to improve this paper. This work is supported by the National Science Foundation of China (Grant No. 61501328) and the Doctoral Fund of Tianjin Normal University (Grant No. 52XB1406).

Funding

Funding was provided by the National Science Foundation of China (Grant No. 61501328) and the Doctoral Fund of Tianjin Normal University (Grant No. 52XB1406).

Author information


Contributions

MS proposes the innovation ideas and theoretical analysis, and LF carries out experiments and data analysis.

Corresponding author

Correspondence to Mingzhu Shi.

Ethics declarations

Consent for publication

Written informed consent was obtained from the patient for the publication of this report and any accompanying images.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Shi, M., Feng, L. A novel local and nonlocal total variation combination method for image restoration in wireless sensor networks. J Wireless Com Network 2017, 167 (2017). https://doi.org/10.1186/s13638-017-0951-y
