Non-uniform illumination endoscopic imaging enhancement via anti-degraded model and L1L2-based variational retinex

Abstract

In this paper, we propose a novel image enhancement algorithm via an anti-degraded model and L1L2-based variational retinex (AD-L1L2VR) for non-uniform illumination endoscopic images. Firstly, a haze-free endoscopic image is obtained by an anti-degraded model named dark channel prior (DCP). To obtain a more accurate transmission map, it is refined by guided image filtering. Secondly, the haze-free endoscopic image is decomposed into detail and naturalness components by light filtering. Thirdly, a logarithmic Laplacian-based gamma correction (LLGC) is applied to the naturalness component to prevent color cast and uneven lighting. Fourthly, we assume that the error between the detail component of the haze-free image and the product of the associated reflectance and background illumination follows a Gaussian-Laplacian distribution, so the associated reflectance component can be obtained by the proposed L1L2-based variational retinex (L1L2VR) model. Finally, the recombination of the modified naturalness component and the associated reflectance component becomes the final result. Experimental results demonstrate that the proposed algorithm reveals more details in the background regions as well as in other areas of interest and largely prevents color cast. It performs better at improving diagnostic accuracy and reducing misdiagnosis than other existing enhancement methods.

Introduction

Nowadays, signal processing [1,2,3] and image processing [4,5,6] have received increasing attention. Among them, medical image processing has been widely researched. Diseases of the gastrointestinal tract, such as bleeding, ulcers, and tumors, threaten human health. However, traditional diagnostic methods, such as barium meal examination, X-ray scanning, and CT, are invasive to the human body. After the invention of the endoscope, it became possible to generate color images directly inside the human body. In 2000, capsule endoscopy (CE) [4] was introduced, and it has become a useful tool for examining the entire gastrointestinal tract, especially for the screening of small-bowel diseases [5, 6]. However, endoscopic diagnosis is time consuming due to the large amount of video data and the low-contrast image quality. Moreover, the misdiagnosis rate may increase because of blurred edges and low image contrast.

With the purpose of improving the diagnostic detection rate, several techniques and devices have been proposed to optimize visualization [7]. Add-on devices [8], wide-angle colonoscopies [9,10,11,12], and balloon colonoscopes [13] are examples of advanced imaging devices that have been widely used to improve diagnostic yield. Color enhancement, applied at the chip level or as a post-processing step, is another method to increase image quality and diagnostic yield [14]. The Fuji Intelligent Color Enhancement (FICE, Fujinon Inc.) system [15], narrow-band imaging (NBI) [16], i-Scan [17], and retinex [18] are examples of widely used post-processing color enhancement algorithms [6, 19].

In order to diagnose disease successfully, a color image enhancement technology should meet three major requirements. First, it should preserve naturalness as much as possible, because color is one of the most important bases for diagnosing pathology. Second, it should compensate for the scattering caused by mucosa and digestive juice inside the human body. Third, it should reveal as many details as possible to improve the diagnostic rate.

Amongst the various enhancement methods, retinex has received much attention due to its simplicity and effectiveness in enhancing non-uniform illumination images [20]. Retinex simulates the mechanism of the human visual system (HVS); however, computing the illumination or reflectance from a single observed image is an ill-posed problem. In order to obtain more accurate results, many modified retinex methods have been proposed. Path-based retinex methods [21] are the simplest, but they usually require high computational complexity. Jobson et al. proposed the multi-scale retinex (MSR) [22, 23] algorithm and the color restored multi-scale retinex (CRMSR) [24] algorithm. Partial differential equations (PDEs) were introduced into the retinex algorithm in 1974 [25]. However, when solving the Poisson equations, the hard thresholding operator in PDE-based retinex algorithms introduces extra artifacts. In 2011, a total variational retinex method (TVR) [26] was proposed. In 2014, a variational Bayesian model for retinex was proposed by Wang et al. [20].

However, atmospheric transmission is important but not considered in existing classical enhancement methods. In real endoscopic imaging scenes, images captured by the endoscope are influenced by the scattering and absorption of mucosa and digestive juice inside the human body. In order to overcome this drawback, a novel endoscopic imaging enhancement via an anti-degraded model and L1L2-based variational retinex (AD-L1L2VR) is proposed in this paper. Before enhancing an observed image, an anti-degraded model named dark channel prior (DCP) is applied to obtain a haze-free endoscopic image. Secondly, the haze-free endoscopic image is decomposed into detail and naturalness components by light filtering, and these two parts are treated separately. Then, a logarithmic Laplacian-based gamma correction (LLGC) is applied to the naturalness component to prevent color cast and uneven lighting. In addition, most retinex methods assume that the estimated error between the observed image and the product of reflectance and background illumination is a random variable following a Gaussian distribution with zero mean and variance δ². The maximum likelihood estimation (MLE) solution under a Gaussian distribution is equivalent to the ordinary least squares (OLS) solution. However, the OLS solution, although easy to compute, is sensitive to outliers. If the error is Laplacian distributed, the MLE solution is equivalent to the least absolute deviation (LAD) solution, which is robust to outliers. We therefore assume that the error between the detail component of the haze-free image and the product of the associated reflectance and background illumination follows a Gaussian-Laplacian distribution, and the associated reflectance component is obtained by the proposed L1L2-based variational retinex (L1L2VR) model. Finally, the recombination of the associated reflectance and naturalness components becomes the final result.

This paper is organized as follows: Section 2 describes the optical model and the retinex model. Section 3 gives the details of the proposed algorithm and the optimization strategy. Section 4 presents the experimental results and evaluation, and Section 5 the discussion. Section 6 concludes the paper.

Background

Optical model

Figure 1 is a schematic of the bimodal endoscope hardware. When the endoscope diagnoses pathology, images captured by the endoscope will be more or less influenced by scattering and absorption phenomenon. In this section, we introduce an optical model that is widely used in computer vision for describing the light transmission process. Pictorial description of endoscopic imaging in the optical model is shown in Fig. 2. This model is defined as follows:

$$ {I}^c\left(i,j\right)={J}^c\left(i,j\right)\times t\left(i,j\right)+\left(1-t\left(i,j\right)\right)\times {A}^c, $$
(1)

where c is the cth color channel, I c(i, j) is the hazy image captured by the endoscope, J c(i, j) is the haze-free image, A c is the global atmospheric light, and t(i, j) is the transmission map that describes the non-scattered light between the observed objects and the camera. The first term J c(i, j) × t(i, j) represents the direct attenuation; the second term (1 − t(i, j)) × A c represents the airlight.

Fig. 1

Schematic of the bimodal endoscope hardware

Fig. 2

Description of endoscopic imaging in the optical model

If the atmosphere is homogeneous, the transmission t(i, j) can be expressed as follows:

$$ t\left(i,j\right)={e}^{-\beta d\left(i,j\right)}, $$
(2)

where β represents the scattering coefficient of the atmosphere and d(i, j) represents the scene depth between the endoscope and the diseased tissue.
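As a toy illustration (not from the paper), the degradation process in Eqs. (1) and (2) can be simulated directly; the values of β, A, and the depths below are made up:

```python
import numpy as np

# Toy simulation of the optical model, Eqs. (1)-(2); beta, A, and the
# depths are illustrative values, not from the paper.
def degrade(J, d, beta=1.0, A=0.9):
    t = np.exp(-beta * d)          # Eq. (2): transmission decays with depth
    return J * t + (1.0 - t) * A   # Eq. (1): direct attenuation + airlight

J = np.array([0.2, 0.5, 0.8])      # haze-free intensities
d = np.array([0.0, 1.0, 2.0])      # scene depths
I = degrade(J, d)                  # at zero depth, I equals J exactly
```

As the depth grows, t tends to 0 and the observed intensity approaches the atmospheric light A; this is exactly the degradation that the anti-degraded model inverts.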

Retinex model

When the endoscope diagnoses pathology, non-uniform illumination images are captured due to the hostile imaging environment inside the human body. Amongst the various enhancement methods, the retinex [27] algorithm has received much attention due to its simplicity and effectiveness in enhancing non-uniform illumination images. According to the retinex model, a given image can be decomposed into two parts: the illumination and the reflectance components. It is defined as follows:

$$ S=L\times R, $$
(3)

where S is the observed image and L and R are illumination and reflectance components respectively.

For easy calculation, Eq. (3) is usually converted into the logarithmic form, as shown:

$$ s=l+r, $$
(4)

where s = log(S), l = log(L), and r = log(R).
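The equivalence of (3) and (4) is easy to verify numerically with toy values:

```python
import numpy as np

# The multiplicative retinex model (3) becomes additive in the log domain (4).
S = np.array([0.32, 0.45])    # observed image (toy values)
L = np.array([0.40, 0.50])    # illumination
R = S / L                     # reflectance, so that S = L * R holds
s, l, r = np.log(S), np.log(L), np.log(R)   # Eq. (4): s = l + r
```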

Method

Here, we propose a novel enhancement algorithm based on AD-L1L2VR. The proposed approach consists of three major modules: an anti-degraded module, a naturalness preserved module, and a contrast enhancement module, as illustrated in Fig. 3. Firstly, the original endoscopic image captured by the camera is processed by DCP to obtain a haze-free image. Secondly, the haze-free endoscopic image is decomposed into a detail component and a naturalness component. The naturalness component is processed by LLGC to prevent color cast. The detail component is decomposed into reflectance and illumination components via L1L2VR. Finally, the synthesis of the reflectance and mapped naturalness components becomes the final enhanced output image.

Fig. 3

Flowchart of the proposed algorithm. a Original image. b–d Dark channel, transmission map, and refined transmission map of (a). e Haze-free image of (a). f Naturalness component of (e). g Naturalness component with gamma correction. h Detail component of (e). i Reflectance component of (h). j Output image

Anti-degraded model

In this paper, the haze-free endoscopic image is obtained by an anti-degraded module based on DCP. The dark channel prior was proposed by He et al. [28] and is based on statistics of haze-free outdoor images. The dark channel of the haze-free endoscopic image is defined as follows:

$$ {J}_{\mathrm{dark}}\left(i,j\right)=\underset{c\in \left\{\mathrm{r},\mathrm{g},\mathrm{b}\right\}}{\min}\left(\underset{\left(x,y\right)\in {\Omega}_1}{\min}\left({J}^c\left(x,y\right)\right)\right), $$
(5)

where J c(x, y) represents the cth channel of J(x, y) and Ω1 represents the local patch centered at (i, j). According to [28], Eq. (1) is equivalent to:

$$ \underset{\left(x,y\right)\in {\Omega}_1}{\min}\left(\frac{I^c\left(x,y\right)}{A^c}\right)=\tilde{t}\left(i,j\right)\underset{\left(x,y\right)\in {\Omega}_1}{\min}\left(\frac{J^c\left(x,y\right)}{A^c}\right)+\left(1-\tilde{t}\left(i,j\right)\right). $$
(6)

According to the dark channel prior, the dark channel of the haze-free endoscopic image J is close to zero. So, the transmission map can be estimated as follows:

$$ \tilde{t}\left(i,j\right)=1-\underset{\left(x,y\right)\in {\Omega}_1}{\min}\left(\underset{c\in \left\{r,g,b\right\}}{\min}\frac{I^c\left(x,y\right)}{A^c}\right). $$
(7)

Directly using the above transmission map may introduce block artifacts into the haze-free endoscopic image. To overcome this drawback, the transmission map is refined by guided filtering [29]. It can be described as follows:

$$ t\left(i,j\right)=\mathrm{GuideFilter}\left(\tilde{t}\left(i,j\right)\right). $$
(8)

Finally, because the direct attenuation term J c(i, j) × t(i, j) may be close to zero, the transmission map is restricted to a lower bound t 0, which is set to 0.1 empirically. The haze-free endoscopic image can then be obtained as follows:

$$ {J}^c\left(i,j\right)=\frac{I^c\left(i,j\right)-{A}^c}{\max \left(t\left(i,j\right),{t}_0\right)}+{A}^c. $$
(9)
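A minimal NumPy sketch of Eqs. (5), (7), and (9) is given below. It uses a naive loop-based minimum filter, omits the guided-filter refinement of Eq. (8), and assumes a scalar atmospheric light A for simplicity:

```python
import numpy as np

def dark_channel(img, patch=3):
    # Eq. (5): per-pixel min over channels, then a local min over a patch
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(I, A, t0=0.1, patch=3):
    # Eq. (7): transmission estimate; Eq. (9): scene radiance recovery.
    # The guided-filter refinement of Eq. (8) is omitted in this sketch.
    t = 1.0 - dark_channel(I / A, patch)
    t = np.maximum(t, t0)          # lower bound t0 as in Eq. (9)
    return (I - A) / t[..., None] + A
```

On a synthetic image degraded with a known constant transmission, this recovers the haze-free input exactly, since the input's dark channel is zero.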

Image decomposition based on light filtering

The haze-free endoscopic image is first decomposed into the detail and naturalness components that are discussed separately in the following steps. This decomposition processing can be described as follows:

$$ {\displaystyle \begin{array}{l}{J}^c={J}_d^c+{J}_n^c\\ {}{J}_d^c=\left(1-\alpha \right)\times {J}^c,\\ {}{J}_n^c=\alpha \times {J}^c\end{array}} $$
(10)

where J c is the haze-free image of the cth color channel, \( {J}_d^c \) and \( {J}_n^c \) are the detail and naturalness components of J c, and α is a weighting factor. Based on the assumption that naturalness is the local maxima for each pixel, α is defined as follows:

$$ \alpha =\frac{1}{2}\frac{\sum \limits_{\left(x,y\right)\in {\Omega}_2}{J}^c\left(x,y\right)\times H\left({J}^c\left(x,y\right),{J}^c\left(i,j\right)\right)}{J_{\mathrm{max}}\times \sum \limits_{\left(x,y\right)\in {\Omega}_2}H\left({J}^c\left(x,y\right),{J}^c\left(i,j\right)\right)}, $$
(11)

where Ω2 is the four-connected, five-pixel neighborhood centered at pixel (i, j), and J max is the maximum value of the three color channels at location (i, j).

After decomposition, the detail and naturalness components can be processed separately. The mapped naturalness component is acquired using LLGC, and the reflectance is obtained by processing the detail component via L1L2VR.

Naturalness mapping using LLGC

Due to non-uniform illumination or scattering, images captured by the endoscope camera may exhibit color cast. To overcome this problem, LLGC is applied based on the assumption that the color variance follows a Laplacian distribution. The LLGC can be described as follows:

$$ {\displaystyle \begin{array}{l}{J}_n^m=W{\left(\frac{\log \left({J}_n+\varsigma \right)}{W}\right)}^{\gamma}\\ {}\gamma =\frac{Q\left({D}^{\max}\left(i,j\right)\left|{\Omega}_3\right.\right)}{Q\left({D}^{\min}\left(i,j\right)\left|{\Omega}_3\right.\right)}\\ {}Q\left({D}^c\left(i,j\right)\left|{\Omega}_3\right.\right)=\frac{1}{N}\sum \limits_{\left(i,j\right)\in {\Omega}_3}\frac{1}{2b}\exp \left(-\frac{\left|{D}^c\left(i,j\right)-\mu \right|}{b}\right)\\ {}{D}^c\left(i,j\right)=\left|{J}^{\max}\left(i,j\right)-{J}^c\left(i,j\right)\right|\end{array}} $$
(12)

where W is the white value, ς is a small positive constant, Ω3 is the region formed by the top 0.1% brightest pixels in the dark channel, and D c(i, j) is the color difference. The superscripts max and min denote the maximum and minimum values, respectively, N is the number of pixels in region Ω3, and μ and b are the location and scale parameters, respectively, of the Laplacian distribution.

Image decomposition via L 1 L 2VR

Computing illumination or reflectance from a single observed image is ill-posed. To solve this problem, many variational retinex models have been proposed. In this paper, the reflectance component is acquired via an L1L2VR model that simultaneously estimates the illumination and reflectance.

The retinex model can be described as S = R × L, where S represents the acquired detail component J d in this paper, and R and L represent the reflectance and background illumination components of J d, respectively. According to Bayes' theorem, the general physical model can be expressed as a posterior distribution:

$$ p\left(R,L\left|S\right.\right)\propto p\left(\left.S\right|R,L\right)p(R)p(L), $$
(13)

where p(R, L|S) represents posterior distribution, p(S|R, L) represents the likelihood, p(R) represents prior probability on the reflectance component, and p(L) represents prior probability on background illumination component. These are described as follows:

Likelihood p(S|R, L)

Most retinex methods assume that the estimated error ξ = S − R × L is a random variable following a Gaussian distribution with zero mean and variance \( {\delta}_1^2 \); it can be defined as follows:

$$ p\left(\left.S\right|R,L\right)=N\left(\xi \left|0,{\delta}_1^2\right.\mathbf{1}\right), $$
(14)

where 1 is the identity matrix. The maximum likelihood estimation (MLE) solution under the Gaussian distribution is equivalent to the ordinary least squares (OLS) solution; it is described as follows:

$$ \left(\widehat{R},\widehat{L}\right)=\arg \underset{R,L}{\min}\frac{1}{2}{\left\Vert S-R\times L\right\Vert}_2^2. $$
(15)

However, the OLS solution, although easy to compute, is sensitive to outliers. If the error is Laplacian distributed, i.e., \( p\left(\left.S\right|R,L\right)=L\left(\xi \left|0,{\delta}_2\mathbf{1}\right.\right) \), the MLE solution is equivalent to the least absolute deviation (LAD) solution; it is defined as follows:

$$ \left(\widehat{R},\widehat{L}\right)=\arg \underset{R,L}{\min }{\left\Vert S-R\times L\right\Vert}_1. $$
(16)

Compared with the OLS method, the LAD method is robust to outliers [30]. In this paper, we assume that the error vector follows an additive combination of two independent distributions: a Gaussian and a Laplacian distribution.
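The robustness argument can be illustrated with a scalar fit: for a constant model, the OLS estimate is the sample mean, the LAD estimate is the median, and only the latter resists an outlier. A toy sketch:

```python
import numpy as np

# For a constant model x, argmin_x sum_i (s_i - x)^2 is the mean (OLS),
# while argmin_x sum_i |s_i - x| is the median (LAD).
s = np.array([1.0, 1.1, 0.9, 1.0, 10.0])  # toy data with one outlier
ols = s.mean()       # pulled toward the outlier
lad = np.median(s)   # stays near the bulk of the data
```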

Prior p(R)

According to the assumption that the reflectance is piece-wise constant [20, 26, 31], the gradient of the reflectance is Laplacian distributed:

$$ p(R)=L\left(\nabla R\left|0,{\delta}_3\mathbf{1}\right.\right). $$
(17)

Prior p(L)

Two constraints are assigned for the background illumination component. The first is that the background illumination is smooth spatially [26, 32], and the second is to constrain the scale of the background illumination. For the first assumption, the gradient of background illumination is Gaussian distributed:

$$ {p}_1(L)=\mathrm{N}\left(\nabla L\left|0,{\delta}_4^2\right.\mathbf{1}\right). $$
(18)

For the second constraint, the background illumination follows a Gaussian distribution based on the white-patch assumption:

$$ {p}_2(L)=\mathrm{N}\left(L\left|{L}_0,{\delta}_5^2\right.\mathbf{1}\right). $$
(19)

So, the prior p(L) can be rewritten as follows:

$$ p(L)={p}_1(L){p}_2(L). $$
(20)

In order to effectively estimate the reflectance and illumination components, the maximum a posteriori (MAP) problem is equivalent to the solution of an energy minimization problem:

$$ {\displaystyle \begin{array}{l}E\left(R,L\right)=\frac{1}{2}{\left\Vert S-R\times L\right\Vert}_2^2+\alpha {\left\Vert S-R\times L\right\Vert}_1+\beta {\left\Vert \nabla R\right\Vert}_1\\ {}\kern4.899998em +\frac{1}{2}{\gamma}_1{\left\Vert \nabla L\right\Vert}_2^2+\frac{1}{2}{\gamma}_2{\left\Vert L-{L}_0\right\Vert}_2^2\kern0.5em s.t.S\le L,\end{array}} $$
(21)

where α, β, γ 1, and γ 2 are positive parameters that weight each term in the proposed model, and L 0 is the mean value of the Gaussian distribution, which can simply be estimated by averaging J d.

However, this assumption may cause halo artifacts around strong edges, because the background illumination is not always smooth and may change sharply in some areas. Therefore, spatial adjustment parameters are introduced in this paper to constrain the total variation (TV) regularization strength. The proposed spatially adaptive L1L2VR model can be written as follows:

$$ {\displaystyle \begin{array}{l}E\left(R,L\right)=\frac{1}{2}{\left\Vert S-R\times L\right\Vert}_2^2+\alpha {\left\Vert S-R\times L\right\Vert}_1+\beta w{\left\Vert \nabla R\right\Vert}_1\\ {}\kern4.299998em +{\gamma}_1v{\left\Vert \nabla L\right\Vert}_1+\frac{1}{2}{\gamma}_2{\left\Vert L-{L}_0\right\Vert}_2^2\kern0.5em s.t.S\le L,\end{array}} $$
(22)

where w and v are weight parameters that control the TV regularization strength; they are defined as follows:

$$ {\displaystyle \begin{array}{l}w=g\left(T\left(x,y\right)\right)\\ {}v=g\left(B\left(x,y\right)\right).\end{array}} $$
(23)

Here, g(x) is a monotone decreasing function. It should be large where T(x, y) and B(x, y) are small, and vice versa. This function can be defined as follows:

$$ \mathrm{g}(x)={\left[1-{\left(\frac{x}{K}\right)}^2\right]}^2, $$
(24)

where K is set to the 90% point of the cumulative distribution function of T(x, y) or B(x, y). T(x, y) and B(x, y) are defined as follows:

$$ {\displaystyle \begin{array}{l}B\left(x,y\right)=H\left[\left|\nabla J\left(x,y\right)\right|-{\alpha}_t\times t\left(x,y\right)\right]\\ {}T\left(x,y\right)=H\left[t\left(x,y\right)\right]\\ {}H(x)=\left\{\begin{array}{c}0,\kern1em x<0\\ {}x,\kern0.5em x\ge 0\end{array}\right.\end{array}} $$
(25)

where ∇J(x, y) is the gradient of J d, α t is the suppression strength factor, and t(x, y) is the suppression term. The suppression term is defined as:

$$ {\displaystyle \begin{array}{l}t\left(x,y\right)=\left|\nabla J\left(x,y\right)\right|\times {\omega}_{\sigma}\left(x,y\right)\\ {}{\omega}_{\sigma}\left(x,y\right)=\frac{H\left({DoG}_{\sigma}\left(x,y\right)\right)}{\mathrm{sum}\left(H\left({DoG}_{\sigma}\left(x,y\right)\right)\right)}\\ {}{DoG}_{\sigma}\left(x,y\right)=\frac{1}{2\pi {\left(4\sigma \right)}^2}\exp \left(-\frac{x^2+{y}^2}{2{\left(4\sigma \right)}^2}\right)-\frac{1}{2{\pi \sigma}^2}\exp \left(-\frac{x^2+{y}^2}{2{\sigma}^2}\right).\end{array}} $$
(26)
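As a sketch, the rectified and normalized DoG weights ω σ of Eq. (26) can be computed on a discrete grid; the kernel size and σ below are illustrative choices:

```python
import numpy as np

def dog_weights(sigma=1.0, size=9):
    # Eq. (26): difference of a wide (4*sigma) and a narrow (sigma) Gaussian,
    # half-wave rectified by H(.) and normalized so the weights sum to one.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    wide = np.exp(-r2 / (2 * (4 * sigma) ** 2)) / (2 * np.pi * (4 * sigma) ** 2)
    narrow = np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    k = np.maximum(wide - narrow, 0.0)  # H(DoG): keep the positive surround
    return k / k.sum()                  # normalize as in omega_sigma

w = dog_weights()
```

At the center the narrow Gaussian dominates, so the rectified weight there is zero; the surviving annular surround is what suppresses gradients near strong edges.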

Split Bregman algorithm for the proposed model

Since two unknown variables exist in Eq. (22), the minimization problem (22) can be divided into two subproblems as follows:

$$ {E}_1(R)=\frac{1}{2}{\left\Vert \frac{S}{L}-R\right\Vert}_2^2+\alpha {\left\Vert \frac{S}{L}-R\right\Vert}_1+\beta w{\left\Vert \nabla R\right\Vert}_1, $$
(27)
$$ {E}_2(L)=\frac{1}{2}{\left\Vert \frac{S}{R}-L\right\Vert}_2^2+\alpha {\left\Vert \frac{S}{R}-L\right\Vert}_1+{\gamma}_1v{\left\Vert \nabla L\right\Vert}_1+\frac{1}{2}{\gamma}_2{\left\Vert L-{L}_0\right\Vert}_2^2. $$
(28)

By introducing an auxiliary variable d and a residual variable b, E 1(R) can be rewritten as follows:

$$ {E}_1(R)=\frac{1}{2}{\left\Vert \frac{S}{L}-R\right\Vert}_2^2+\alpha {\left\Vert \frac{S}{L}-R\right\Vert}_1+\beta w{\left\Vert d\right\Vert}_1+\frac{\beta w\lambda}{2}{\left\Vert d-\nabla R-b\right\Vert}_2^2. $$
(29)

The above equation can be transformed:

$$ \left\{\begin{array}{c}{d}^j=\underset{d}{\arg \min }\ \beta w{\left\Vert d\right\Vert}_1+\frac{\beta w\lambda}{2}{\left\Vert d-\nabla {R}^{j-1}-{b}^{j-1}\right\Vert}_2^2\\ {}{R}^j=\underset{R}{\arg \min}\frac{1}{2}{\left\Vert \frac{S}{L^{j-1}}-R\right\Vert}_2^2+\alpha {\left\Vert \frac{S}{L^{j-1}}-R\right\Vert}_1+\frac{\beta w\lambda}{2}{\left\Vert {d}^j-\nabla R-{b}^{j-1}\right\Vert}_2^2\\ {}{b}^j={b}^{j-1}+\nabla {R}^j-{d}^j\end{array}\right.. $$
(30)

The above subproblems (30) can be solved explicitly by the split Bregman algorithm [33]. The computation procedure is described in Algorithm 1.

Algorithm 1 Split Bregman algorithm for the proposed model
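The d-subproblem in (30) has a closed-form soft-thresholding (shrinkage) solution; with the common factor βw divided out, the threshold is 1/λ. A sketch:

```python
import numpy as np

def shrink(q, tau):
    # Soft thresholding: the closed-form minimizer of
    #   ||d||_1 + (1/(2*tau)) * ||d - q||_2^2,
    # used for the d-update in (30) with q = grad(R) + b and tau = 1/lambda.
    return np.sign(q) * np.maximum(np.abs(q) - tau, 0.0)

d = shrink(np.array([2.0, -0.5, 0.3]), 1.0)   # entries inside [-1, 1] vanish
```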

Synthesis of reflectance and naturalness

According to retinex theory, the light reaching the eyes depends on the reflectance and the illumination, so we integrate the reflectance component R and the mapped naturalness component \( {J}_n^m \) to obtain the final enhanced result:

$$ {S}_e=R\times {J}_n^m. $$
(32)

Results

In our experiments, a large number of images were tested; due to space limitations, only some of the test images are shown. Some original test images with a logo were downloaded from http://www.bioon.com/. The experimental results were computed using MATLAB R2011a under Windows 7. In this paper, the parameters α, β, γ 1, γ 2, λ, and tol are empirically set to 0.1, 0.01, 0.1, 0.2, 10, and 0.01, respectively. The proposed algorithm consists of three major modules: an anti-degraded module, a naturalness preserved module, and a contrast enhancement module. Figure 4 shows the experimental results obtained using the anti-degraded model. It can be seen that the color cast caused by scattering has been prevented. However, details in the dark areas of the haze-free images still cannot be seen. Figure 5 shows the results of the proposed L1L2VR method. Overexposed areas are suppressed, details in the dark areas are enhanced, and global lightness and contrast are improved, but color cast is not prevented by the L1L2VR method alone. Our proposed algorithm combines the AD method and the L1L2VR method, which successfully overcomes both drawbacks. As shown in Fig. 6, the results obtained by the proposed algorithm have a natural look: they not only enhance details in dark areas but also prevent color cast.

Fig. 4

Anti-degraded model. a, c Original images. b, d Haze-free images of (a) and (c)

Fig. 5

L1L2VR results of endoscopic images. a–d Original images. e–h L1L2VR results of (a–d)

Fig. 6

AD-L1L2VR results of endoscopic images. a–d Original images. e–h AD-L1L2VR results of (a–d)

Discussion

Subjective assessment

Figures 7, 8, 9, and 10 illustrate the experimental results on endoscopic images obtained by the proposed method and other relevant state-of-the-art methods. In this paper, the proposed algorithm was compared with local histogram equalization (LHE) [34], MSR [22], the spatially adaptive retinex variational model (SARV) [35], and the naturalness preserved enhancement algorithm (NPEA) [36]. Clearly, LHE, MSR, and SARV produce over-enhanced images, further saturating the results and causing color cast. For example, Figs. 7b, d and 8b, d show serious color cast, and Figs. 8c and 10c show overexposure. In addition, LHE did not enhance the dark areas successfully: in the dark areas of Figs. 7b and 8b, the images processed by LHE are not pleasing. Compared with the above methods, NPEA and the proposed AD-L1L2VR produce very natural-looking images. The enhanced images reveal many details in the background regions as well as in other areas of interest, whereas NPEA introduces a light color cast, especially in Figs. 8e and 10e.

Fig. 7

Enhanced results using different methods. a Original image. b LHE. c MSR. d SARV. e NPEA. f AD-L1L2VR

Fig. 8

Enhanced results using different methods. a Original image. b LHE. c MSR. d SARV. e NPEA. f AD-L1L2VR

Fig. 9

Enhanced results using different methods. a Original image. b LHE. c MSR. d SARV. e NPEA. f AD-L1L2VR

Fig. 10

Enhanced results using different methods. a Original image. b LHE. c MSR. d SARV. e NPEA. f AD-L1L2VR

Objective assessment

Three objective metrics are used to evaluate the enhanced results: RMS contrast [37], discrete entropy [38], and lightness order error (LOE) [36]. The averages of the evaluation scores are also calculated for overall comparison. In Tables 1, 2, and 3, red data represent the highest value, green data the second highest, and blue data the third highest.

Table 1 Contrast of compared methods
Table 2 Discrete entropy of compared methods
Table 3 LOE values of compared methods

The first metric is RMS contrast; it is defined as follows:

$$ C={\left\{\frac{1}{m\cdot n}\sum \limits_{i=0}^m\sum \limits_{j=0}^n{\left[I\left(i,j\right)-\mu \right]}^2\right\}}^{1/2}, $$
(33)

where I(i, j) is the pixel intensity, μ is the mean intensity, and m and n are the image dimensions. Table 1 shows the quantitative contrast measurements. As shown in Table 1, LHE and MSR achieve higher contrast than the other methods. However, the proposed algorithm and NPEA have better subjective assessment performance than the other three methods.

The second metric is discrete entropy, defined in Eq. (34). Higher discrete entropy means that more information is revealed from the original image. From Table 2, LHE and the proposed algorithm have higher discrete entropy than the others.

$$ H=-\sum \limits_{x\in L}q(x)\ln q(x), $$
(34)

where q(x) is the probability of gray level x. The third metric is LOE, which evaluates naturalness preservation. According to the definition of LOE, a smaller LOE value represents better naturalness preservation. As shown in Table 3, the proposed algorithm has the best naturalness preservation performance.
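A sketch of the first two metrics, computed directly from Eqs. (33) and (34); the histogram binning for the entropy is an implementation choice:

```python
import numpy as np

def rms_contrast(img):
    # Eq. (33): root-mean-square deviation from the mean intensity
    return float(np.sqrt(np.mean((img - img.mean()) ** 2)))

def discrete_entropy(img, bins=256):
    # Eq. (34): Shannon entropy of the gray-level distribution (natural log);
    # the 256-bin histogram over [0, 1] is an implementation choice.
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    q = hist / hist.sum()
    q = q[q > 0]
    return float(-np.sum(q * np.log(q)))
```

A constant image yields zero contrast and zero entropy, while an image split evenly between two gray levels has entropy ln 2.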

In summary, compared with other relevant state-of-the-art enhancement methods, the proposed algorithm not only preserves more details and prevents halo artifacts but also prevents the color cast caused by scattering. The proposed algorithm achieves good quality in both subjective and objective assessments. It is a good way to improve diagnosis and reduce misdiagnosis in endoscopic imaging.

Conclusions

This paper proposes a novel image enhancement algorithm via an anti-degraded model and L1L2-based variational retinex (AD-L1L2VR) for non-uniform illumination endoscopic images, which not only enhances the details of the image but also preserves its naturalness. The anti-degraded model is used to prevent the color cast caused by scattering. In order to estimate the reflectance and background illumination components, L1L2VR is proposed with spatially adaptive constraints on the TV regularization strength. Moreover, logarithmic Laplacian-based gamma correction is applied to the naturalness component to prevent the color cast caused by non-uniform illumination or scattering. Experimental results demonstrate that the proposed algorithm performs better than other existing algorithms.

References

  1. 1.

    F Zhao, H Nie, H Chen, Group buying spectrum auction algorithm for fractional frequency reuses cognitive cellular systems. Ad Hoc Netw. 58, 239–246 (2017)

  2. 2.

    F Zhao, W Wang, H Chen, Q Zhang. “Interference alignment and game-theoretic power allocation in MIMO Heterogeneous Sensor Networks communications,” Signal Processing. 2016;126(C):173–9.

  3. 3.

    F Zhao, L Wei, H Chen, Optimal time allocation for wireless information and power transfer in wireless powered communication systems. IEEE Trans. Veh. Technol. 65(3), 1830–1835 (2016)

  4. 4.

    G Iddan, G Merson, A Glukhovsky, P Swain, Wireless capsule endoscopy. Nature 405, 417–418 (2000)

  5. 5.

    DK Iakovidis, A Koulaouzidis, Software for enhanced video capsule endoscopy: challenges for essential progress. Nat. Rev. Gastroenterol. Hepatol. 12(3), 172–186 (2015)

  6. 6.

    A Koulaouzidis, E Rondonotti, A Karargyris, Small-bowel capsule endoscopy: a ten-point contemporary review. World J. Gastroenterol. 19(24), 3726–3746 (2013)

  7. 7.

    T Matsuda, A Ono, M Sekiguchi, et al., Advances in image enhancement in colonoscopy for detection of adenomas. Nat. Rev. Gastroenterol. Hepatol. 14, 305–314 (2017)

  8. 8.

    SC Ng et al., The efficacy of cap-assisted colonoscopy in polyp detection and cecal intubation: a meta-analysis of randomized controlled trials. Am. J. Gastroenterol. 107, 1165–1173 (2012)

  9. 9.

    VP Deenadayalu, V Chadalawada, DK Rex, 170 degrees wide-angle colonoscope: effect on efficiency and miss rates. Am. J. Gastroenterol. 99, 2138–2142 (2004)

  10. 10.

    H Fatima et al., Wide-angle (WA) (170° angle of view) versus standard (ST) (140°angle of view) colonoscopy [abstract]. Gastrointest. Endosc. 63, AB204 (2013)

  11. 11.

    T Uraoka et al., A novel extra-wide-angle-view colonoscope: a simulated pilot study using anatomic colorectal models. Gastrointest. Endosc. 77, 480–483 (2013)

  12. 12.

    IM Gralnek et al., Comparison of standard forward-viewing mode versus ultrawide-viewing mode of a novel colonoscopy platform: a prospective, multicenter study in the detection of simulated polyps in an in vitro colon model (with video). Gastrointest. Endosc. 77, 472–479 (2013)

  13. 13.

    N Hasan et al., A novel balloon colonoscope detects significantly more simulated polyps than a standard colonoscope in a colon model. Gastrointest. Endosc. 80, 1135–1140 (2014)

  14. 14.

    F Deeba, SK Mohammed, FM Bui, et al., Unsupervised abnormality detection using saliency and Retinex based color enhancement. Conf Proc IEEE Eng Med Biol Soc 2016, 3871–3874 (2016)

  15. 15.

    Y Miyake, T Kouzu, et al., Development of new electronic endoscopes using the spectral images of an internal organ. 13th Color Imaging Conf Final Program Proc 13(3), 261–263 (2005)

  16. 16.

    Y Hamamoto, T Endo, et al., Usefulness of narrow-band imaging endoscopy for diagnosis of Barrett’s esophagus. J. Gastroenterol. 39(1), 14–20 (2004)

  17. A Hoffman et al., Recognition and characterization of small colonic neoplasia with high-definition colonoscopy using i-Scan is as precise as chromoendoscopy. Dig. Liver Dis. 42(1), 45–50 (2010)

  18. H Okuhata et al., Application of the real-time Retinex image enhancement for endoscopic images. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2013, 3407–3410 (2013)

  19. S Rajput, SR Suralkar, Comparative study of image enhancement techniques. Int. J. Comput. Sci. Mobile Comput. 2(1), 11 (2013)

  20. L Wang, L Xiao, H Liu, Z Wei, Variational Bayesian method for retinex. IEEE Trans. Image Process. 23(8), 3381–3396 (2014)

  21. G Gianini, A Rizzi, E Damiani, A Retinex model based on absorbing Markov chains. Inf. Sci. 327(C), 149–174 (2016)

  22. Z Rahman, DJ Jobson, GA Woodell, Multi-scale retinex for color image enhancement. in Proc. ICIP 3, 1003–1006 (1996)

  23. ZU Rahman, DJ Jobson, GA Woodell, Retinex processing for automatic image enhancement. J. Electron. Imag. 13(1), 100–110 (2004)

  24. ZU Rahman, DJ Jobson, GA Woodell, Investigating the relationship between image enhancement and image compression in the context of the multi-scale retinex. J. Vis. Commun. Image Represent. 22(3), 237–250 (2011)

  25. BKP Horn, Determining lightness from an image. Comput. Graph. Image Process. 3(4), 277–299 (1974)

  26. MK Ng, W Wang, A total variation model for retinex. SIAM J. Imag. Sci. 4(1), 345–365 (2011)

  27. E Land, J Mccann, Lightness and retinex theory. J. Opt. Soc. Am. 61(61), 1–11 (1971)

  28. K He, J Sun, X Tang, Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011)

  29. K He, J Sun, X Tang, Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013)

  30. D Wang, H Lu, MH Yang, Least soft-threshold squares tracking. in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2371–2378 (2013)

  31. D Zosso, G Tran, SJ Osher, Non-local Retinex—a unifying framework and beyond. SIAM J. Imaging Sci. 8(2), 787–826 (2015)

  32. R Kimmel, M Elad, D Shaked, R Keshet, I Sobel, A variational framework for Retinex. Int. J. Comput. Vis. 52(1), 7–23 (2003)

  33. T Goldstein, S Osher, The split Bregman algorithm for L1 regularized problems. SIAM J. Imaging Sci. 2(2), 323–343 (2009)

  34. TK Kim, JK Paik, BS Kang, Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering. IEEE Trans. Consum. Electron. 44(1), 82–87 (1998)

  35. X Lan, H Shen, L Zhang, A spatially adaptive retinex variational model for the uneven intensity correction of remote sensing images. Signal Process. 101(8), 19–34 (2014)

  36. S Wang, J Zheng, HM Hu, Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 22(9), 3538–3548 (2013)

  37. E Peli, Contrast in complex images. J. Opt. Soc. Am. A 7(10), 2032–2040 (1990)

  38. Z Ye, H Mohamadian, Y Ye, Discrete entropy and relative entropy study on nonlinear clustering of underwater and aerial images. in Proc. IEEE Int. Conf. Control Appl., 318–323 (2007)


Acknowledgements

The authors would like to thank the Image Engineering & Video Technology Lab for its support.

Funding

This work was supported by the Major Science Instrument Program of the National Natural Science Foundation of China under Grant 61527802, the General Program of National Nature Science Foundation of China under Grants 61371132 and 61471043, and the International S&T Cooperation Program of China under Grant 2014DFR10960.

Availability of data and materials

All data are fully available without restriction.

Author information

ZR and TX conceived and refined the algorithm. In addition, ZR wrote and revised the paper. JL, JG, and GS implemented the LHE, MSR, and SARV image enhancement algorithms for comparison, and HW recorded the data. All authors read and approved the final manuscript.

Correspondence to Tingfa Xu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Rao, Z., Xu, T., Luo, J. et al. Non-uniform illumination endoscopic imaging enhancement via anti-degraded model and L1L2-based variational retinex. J Wireless Com Network 2017, 205 (2017). doi:10.1186/s13638-017-0989-x


Keywords

  • Non-uniform endoscopic imaging enhancement
  • Anti-degraded model and L1L2-based variational retinex (AD-L1L2VR)
  • Dark channel prior (DCP)
  • Logarithmic Laplacian-based gamma correction (LLGC)
  • Gaussian-Laplacian distribution