- Research
- Open Access

# Non-uniform illumination endoscopic imaging enhancement via anti-degraded model and *L*_{1}*L*_{2}-based variational retinex

Zhitao Rao^{1}, Tingfa Xu^{1} (Email author), Jiqiang Luo^{2}, Jie Guo^{1}, Guokai Shi^{1} and Hongqing Wang^{1}

*EURASIP Journal on Wireless Communications and Networking* **2017**:205

https://doi.org/10.1186/s13638-017-0989-x

© The Author(s). 2017

**Received:** 26 September 2017 · **Accepted:** 15 November 2017 · **Published:** 4 December 2017

## Abstract

In this paper, we propose a novel image enhancement algorithm via an anti-degraded model and *L*_{1}*L*_{2}-based variational retinex (AD-*L*_{1}*L*_{2}VR) for non-uniform illumination endoscopic images. Firstly, a haze-free endoscopic image is obtained by an anti-degraded model named dark channel prior (DCP); to obtain a more accurate transmission map, it is refined by guided image filtering. Secondly, the haze-free endoscopic image is decomposed into detail and naturalness components by light filtering. Thirdly, a logarithmic Laplacian-based gamma correction (LLGC) is applied to the naturalness component to prevent color cast and uneven lighting. Fourthly, we assume that the error between the detail component of the haze-free image and the product of the associated reflectance and background illumination follows a Gaussian-Laplacian distribution, so the associated reflectance component can be obtained with the proposed *L*_{1}*L*_{2}-based variational retinex (*L*_{1}*L*_{2}VR) model. Finally, the recombination of the modified naturalness component and the associated reflectance component becomes the final result. Experimental results demonstrate that the proposed algorithm reveals more details in the background regions as well as in other areas of interest and largely prevents color cast. It performs better at increasing diagnostic accuracy and reducing misdiagnosis than other existing enhancement methods.

## Keywords

- Non-uniform endoscopic imaging enhancement
- Anti-degraded model and *L*_{1}*L*_{2}-based variational retinex (AD-*L*_{1}*L*_{2}VR)
- Dark channel prior (DCP)
- Logarithmic Laplacian-based gamma correction (LLGC)
- Gaussian-Laplacian distribution

## 1 Introduction

Nowadays, signal processing [1–3] and image processing [4–6] are receiving more and more attention; among them, medical image processing has been widely researched. Diseases of the gastrointestinal tract, such as bleeding, ulcers, and tumors, threaten human health. However, traditional diagnosis methods, such as barium meal examination, X-ray scanning, and CT, are invasive to the human body. After the invention of the endoscope, it became possible to generate color images directly inside the human body. In 2000, capsule endoscopy (CE) [4] was introduced, and it has become a useful tool for examining the entire gastrointestinal tract, especially for the screening of small-bowel diseases [5, 6]. However, endoscopic diagnosis is time consuming due to the large amount of video data and the low-contrast image quality. Besides, the misdiagnosis rate may increase because of blurred edges and low image contrast.

With the purpose of improving the diagnostic detection rate, several techniques and devices have been proposed to optimize visualization [7]. Add-on devices [8], wide-angle colonoscopies [9–12], and balloon colonoscopes [13] are examples of advanced imaging devices that have been widely used to improve diagnostic yield. Color enhancement, at the chip level or as a post-processing step, is another way to increase image quality and diagnostic yield [14]. The Fuji Intelligent Color Enhancement (FICE, Fujinon Inc.) system [15], narrow-band imaging (NBI) [16], I-scan [17], and retinex [18] are examples of post-processing color enhancement algorithms that have been widely used [6, 19].

In order to diagnose disease successfully, a color image enhancement technique should meet three major requirements. First, it should preserve naturalness as much as possible, because color is one of the most important bases for diagnosing pathology. Second, it should suppress the scattering caused by mucosa and digestive juice inside the human body. Third, it should reveal more details to improve the diagnostic rate.

Amongst the various enhancement methods, retinex has received much attention due to its simplicity and effectiveness in enhancing non-uniform illumination images [20]. Retinex simulates the mechanism of the human visual system (HVS); however, computing illumination or reflectance from a single observed image is an ill-posed problem. In order to obtain more accurate results, many modified retinex methods have been proposed. Path-based retinex [21] methods are the simplest, but they usually entail high computational complexity. Jobson et al. proposed the multi-scale retinex (MSR) [22, 23] algorithm and the color restored multi-scale retinex (CRMSR) [24] algorithm. Partial differential equations (PDEs) were introduced into the retinex algorithm in 1974 [25]; however, when solving the Poisson equations, extra artifacts are caused by the hard thresholding operator in PDE-based retinex algorithms. In 2011, a total variational retinex method (TVR) [26] was proposed. In 2014, a variational Bayesian model for retinex was proposed by Wang et al. [20].

However, atmospheric transmission is important but is not considered in existing classical enhancement methods. In real endoscopic imaging scenes, images captured by the endoscope are influenced by the scattering and absorption of mucosa and digestive juice inside the human body. In order to overcome this drawback, a novel endoscopic imaging enhancement via an anti-degraded model and *L*_{1}*L*_{2}-based variational retinex (AD-*L*_{1}*L*_{2}VR) is proposed in this paper. Before enhancing an observed image, an anti-degraded model named dark channel prior (DCP) is applied to obtain a haze-free endoscopic image. Secondly, the haze-free endoscopic image is decomposed into detail and naturalness components by light filtering, and these two parts are processed separately. Then, a logarithmic Laplacian-based gamma correction (LLGC) is applied to the naturalness component to prevent color cast and uneven lighting. In addition, most retinex methods assume that the estimation error between the observed image and the product of reflectance and background illumination is a random variable following a Gaussian distribution with zero mean and variance *δ*^{2}. The maximum likelihood estimation (MLE) solution under a Gaussian distribution is equivalent to the ordinary least squares (OLS) solution. However, although the OLS solution is easy to compute, it is sensitive to outliers. If the error is Laplacian distributed, the MLE solution is equivalent to the least absolute deviation (LAD) solution, which is robust to outliers. We therefore assume that the error between the detail component of the haze-free image and the product of the associated reflectance and background illumination follows a Gaussian-Laplacian distribution, so the associated reflectance component can be obtained with the proposed *L*_{1}*L*_{2}-based variational retinex (*L*_{1}*L*_{2}VR) model. Finally, the recombination of the associated reflectance and naturalness components becomes the final result.

This paper is organized as follows: the optical model and the retinex model are described in Section 2. Section 3 gives the details of the proposed algorithm and the optimization strategy. Experimental results and evaluation are shown in Section 4. Discussion is shown in Section 5. Section 6 concludes the paper.

## 2 Background

### 2.1 Optical model

The optical model widely used to describe hazy image formation is

\( I^c(i,j) = J^c(i,j) \times t(i,j) + \left(1 - t(i,j)\right) \times A^c \)

where *c* is the *c*th color channel, *I*^{ c }(*i*, *j*) is the hazy image captured by the endoscope, *J*^{ c }(*i*, *j*) is the haze-free image, *A* is the global atmospheric light, and *t*(*i*, *j*) is the transmission map that describes the non-scattered light between the observed objects and the camera. The first term *J*^{ c }(*i*, *j*) × *t*(*i*, *j*) represents the direct attenuation; the second term (1 − *t*(*i*, *j*)) × *A*^{ c } represents the airlight. The transmission *t*(*i*, *j*) can be expressed as follows:

\( t(i,j) = e^{-\beta d(i,j)} \)

where *β* represents the scattering coefficient of the atmosphere and *d*(*i*, *j*) represents the scene depth between the endoscope and the diseased tissue.
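The optical model above can be checked numerically on a single channel of a single pixel. The values of *β*, *d*, *A*, and *J* below are illustrative only; the point is that the observed intensity is a transmission-weighted blend of the haze-free radiance and the airlight.

```python
import math

beta, d = 0.8, 1.5          # scattering coefficient, scene depth (illustrative)
t = math.exp(-beta * d)     # transmission map value at this pixel
A = 0.9                     # global atmospheric light (one channel)
J = 0.4                     # haze-free radiance (one channel)
I = J * t + (1 - t) * A     # observed hazy intensity: attenuation + airlight

assert 0 < t < 1 and min(J, A) <= I <= max(J, A)
```

Because 0 < *t* < 1, the observed intensity is a convex combination of *J* and *A*: the deeper the scene point, the closer *I* moves toward the atmospheric light.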

### 2.2 Retinex model

In the retinex model, the observed image is the product of illumination and reflectance:

\( S = L \times R \)

where *S* is the observed image and *L* and *R* are the illumination and reflectance components, respectively. Taking logarithms converts the product into a sum:

\( s = l + r \)

where *s* = log(*S*), *l* = log(*L*), and *r* = log(*R*).
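The log transform is what makes the retinex decomposition tractable: division in the intensity domain becomes subtraction in the log domain. A one-pixel numeric check (pixel values are hypothetical):

```python
import math

S_pixel = 0.6               # observed intensity (hypothetical value)
L_pixel = 0.8               # illumination (hypothetical value)
R_pixel = S_pixel / L_pixel # reflectance implied by S = L * R

s, l = math.log(S_pixel), math.log(L_pixel)
r = s - l                   # log-domain subtraction replaces division
assert abs(math.exp(r) - R_pixel) < 1e-12
```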

## 3 Method

The proposed AD-*L*_{1}*L*_{2}VR approach consists of three major modules, an anti-degraded module, a naturalness-preserved module, and a contrast enhancement module, as illustrated in Fig. 3. Firstly, the original endoscopic image captured by the camera is processed through DCP to obtain a haze-free image. Secondly, the haze-free endoscopic image is decomposed into a detail component and a naturalness component. The naturalness component is processed by LLGC to prevent color cast, while the detail component is decomposed into reflectance and illumination components via *L*_{1}*L*_{2}VR. Finally, the synthesis of the reflectance and mapped naturalness components becomes the final output enhanced image.

### 3.1 Anti-degraded model

In the dark channel prior [28], the dark channel of an image *J* is defined as

\( J^{\mathrm{dark}}(i,j) = \min_{(x,y)\in\Omega_1}\Big(\min_{c} J^c(x,y)\Big) \)

where *J*^{ c }(*x*, *y*) represents the *c*th channel of *J*(*x*, *y*) and Ω_{1} represents the local patch centered at (*i*, *j*). The prior states that, for haze-free images, the dark channel of *J* is close to zero. Normalizing Eq. (1) by the atmospheric light and applying the minimum operators, the transmission map is defined as follows:

\( \tilde{t}(i,j) = 1 - \min_{(x,y)\in\Omega_1}\Big(\min_{c} \frac{I^c(x,y)}{A^c}\Big) \)

In dense haze regions, the direct attenuation *J*^{ c }(*i*, *j*) × *t*(*i*, *j*) may be close to zero, so the transmission map should be restricted to a lower bound *t*_{0}, set to 0.1 empirically. The haze-free endoscopic image can then be obtained as follows:

\( J^c(i,j) = \frac{I^c(i,j) - A^c}{\max\left(t(i,j),\, t_0\right)} + A^c \)
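The anti-degraded module can be sketched in a few lines of pure Python. This is a minimal illustration under simplifying assumptions (a tiny nested-list image, a fixed 3×3 patch, the atmospheric light *A* given rather than estimated, and no guided-filter refinement of the transmission map), not the authors' implementation.

```python
# Minimal dark-channel-prior dehazing sketch on a tiny RGB image
# stored as nested lists of (r, g, b) tuples in [0, 1].

def dark_channel(img, radius=1):
    """Min over a (2*radius+1)^2 patch of the per-pixel channel minimum."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = []
            for x in range(max(0, i - radius), min(h, i + radius + 1)):
                for y in range(max(0, j - radius), min(w, j + radius + 1)):
                    vals.append(min(img[x][y]))
            out[i][j] = min(vals)
    return out

def dehaze(I, A, t0=0.1, radius=1):
    """Estimate transmission via the dark channel, then invert Eq. (1)."""
    h, w = len(I), len(I[0])
    # normalize by the atmospheric light before taking the dark channel
    norm = [[tuple(I[i][j][c] / A[c] for c in range(3)) for j in range(w)]
            for i in range(h)]
    dark = dark_channel(norm, radius)
    t = [[1.0 - dark[i][j] for j in range(w)] for i in range(h)]
    # recovery: J = (I - A) / max(t, t0) + A
    J = [[tuple((I[i][j][c] - A[c]) / max(t[i][j], t0) + A[c]
                for c in range(3)) for j in range(w)] for i in range(h)]
    return J, t
```

On a synthetic hazy image composed exactly by Eq. (1) with constant transmission and a haze-free image whose dark channel is zero, this recovers the haze-free image to machine precision.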

### 3.2 Image decomposition based on light filtering

By light filtering, the haze-free image is decomposed into detail and naturalness components, where *J*^{ c } is the haze-free image of the *c*th color channel, \( {J}_d^c \) and \( {J}_n^c \) are the detail and naturalness components of *J*^{ c }, and *α* is a weighting factor. Based on the assumption that naturalness is the local maximum for each pixel, *α* is defined over Ω_{2}, a five-pixel square in four-connectivity in which *J*^{ c }(*i*, *j*) is the center pixel and *J*^{ c }(*i*, *j*) takes the maximum value of the three color channels at that location.

After decomposition, the detail and naturalness components can be processed separately. The mapped naturalness component is acquired by LLGC, and the reflectance is obtained by processing the detail component via *L*_{1}*L*_{2}VR.
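A rough sketch of the light-filtering idea follows. The text states that naturalness is taken as the local maximum for each pixel over a five-pixel four-connected cross (Ω_{2}); the concrete blend below, naturalness as the cross-maximum of the per-pixel channel maxima and detail as the residual, is our own assumption for illustration and does not reproduce the paper's exact *α* weighting.

```python
# Hedged light-filtering decomposition sketch: img is a 2D list of
# (r, g, b) tuples in [0, 1].

def decompose(img):
    h, w = len(img), len(img[0])
    chan_max = [[max(img[i][j]) for j in range(w)] for i in range(h)]
    natural, detail = [], []
    for i in range(h):
        nrow, drow = [], []
        for j in range(w):
            # five-pixel four-connected cross centered at (i, j)
            cross = [chan_max[i][j]]
            if i > 0:     cross.append(chan_max[i - 1][j])
            if i < h - 1: cross.append(chan_max[i + 1][j])
            if j > 0:     cross.append(chan_max[i][j - 1])
            if j < w - 1: cross.append(chan_max[i][j + 1])
            n = max(cross)                                # naturalness estimate
            nrow.append(n)
            drow.append(tuple(v - n for v in img[i][j]))  # residual detail
        natural.append(nrow)
        detail.append(drow)
    return detail, natural
```

By construction, adding the naturalness value back to each detail channel reconstructs the input exactly, and the naturalness estimate dominates every channel value at its pixel.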

### 3.3 Naturalness mapping using LLGC

Here, *W* is the white value, *ς* is a small positive constant, Ω_{3} represents a region selected from the top 0.1% brightest values in the dark channel, *D*(*i*, *j*) is the color difference, max and min represent the maximum and minimum values, respectively, *N* is the number of pixels in region Ω_{3}, and *μ* and *b* are the location and scale parameters, respectively, of the Laplacian distribution.
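The Laplacian location and scale parameters *μ* and *b* that LLGC relies on have simple closed-form maximum-likelihood estimates: the sample median and the mean absolute deviation about it. This standard fit is sketched below; how the paper maps (*μ*, *b*) to the final gamma value is not reproduced here.

```python
# Maximum-likelihood fit of a Laplacian distribution:
#   mu = sample median, b = mean absolute deviation from mu.

def laplacian_fit(xs):
    ys = sorted(xs)
    n = len(ys)
    mu = ys[n // 2] if n % 2 else 0.5 * (ys[n // 2 - 1] + ys[n // 2])
    b = sum(abs(x - mu) for x in xs) / n
    return mu, b
```

For example, `laplacian_fit([1.0, 2.0, 3.0, 4.0, 5.0])` returns location 3.0 and scale 1.2.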

### 3.4 Image decomposition via *L*_{1}*L*_{2}VR

Computing illumination or reflectance from a single observed image is ill-posed. To address this, many variational retinex models have been proposed. In this paper, the reflectance component is acquired via an *L*_{1}*L*_{2}VR model that estimates illumination and reflectance simultaneously.

The physical model is *S* = *R* × *L*, where *S* represents the acquired detail component *J*_{ d } in this paper and *R* and *L* represent the reflectance and background illumination components of *J*_{ d }, respectively. According to Bayes' theorem, the physical model can be cast as a posterior distribution:

\( p(R, L \mid S) \propto p(S \mid R, L)\, p(R)\, p(L) \)

where *p*(*R*, *L*|*S*) represents the posterior distribution, *p*(*S*|*R*, *L*) represents the likelihood, *p*(*R*) represents the prior probability on the reflectance component, and *p*(*L*) represents the prior probability on the background illumination component. These are described as follows:

### Likelihood *p*(*S*|*R*, *L*)

Assume the error *ξ* = *S* − *R* × *L* is a random variable following a Gaussian distribution with zero mean and variance \( {\delta}_1^2 \):

\( p(S \mid R, L) = N\!\left(\xi \mid 0,\, \delta_1^2 \mathbf{1}\right) \)

where **1** is the identity matrix. The maximum likelihood estimation (MLE) solution under the Gaussian distribution is equivalent to the ordinary least squares (OLS) solution:

\( \min_{R,\,L} \left\| S - R \times L \right\|_2^2 \)

If, instead, the error follows a Laplacian distribution, *p*(*S*|*R*, *L*) = *L*(*ξ*|0, *δ*_{2}**1**), the MLE solution is equivalent to the least absolute deviation (LAD) solution:

\( \min_{R,\,L} \left\| S - R \times L \right\|_1 \)

Compared with the OLS method, the LAD method is robust to outliers [30]. In this paper, we assume the error vector follows an additive combination of two independent distributions: a Gaussian and a Laplacian distribution.
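The OLS-versus-LAD contrast is easiest to see on the simplest model of all, estimating a single location parameter: the OLS solution is the sample mean and the LAD solution is the sample median. The toy data below (with one gross outlier) shows why the Laplacian/LAD assumption is the more robust choice.

```python
from statistics import mean, median

samples = [0.95, 1.0, 1.05, 0.9, 1.1, 100.0]  # one gross outlier at 100

ols_est = mean(samples)    # L2 / Gaussian MLE: pulled far from 1 by the outlier
lad_est = median(samples)  # L1 / Laplacian MLE: stays near 1

assert abs(lad_est - 1.0) < abs(ols_est - 1.0)
```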

### Prior *p*(*R*)

### Prior *p*(*L*)

The prior *p*(*L*) can be rewritten in energy form, where *α*, *β*, *γ*_{1}, and *γ*_{2} are positive parameters that control each term in the proposed model, and *L*_{0} is the mean value of the Gaussian distribution, which can be estimated simply by averaging *J*_{ d }.

Combining the likelihood and the priors yields the proposed *L*_{1}*L*_{2}VR model.

Here, *w* and *v* are weight parameters that control the TV regularization strength. They are defined through a monotonically decreasing function *g*(*x*), which should be large where *T*(*x*, *y*) and *B*(*x*, *y*) are small, and vice versa. The constant *K* in *g*(*x*) is set equal to the 90% value of the cumulative distribution function of *T*(*x*, *y*) or *B*(*x*, *y*).

*T*(*x*, *y*) and *B*(*x*, *y*) are defined in terms of *∇J*(*x*, *y*), the gradient of *J*_{ d }; *α*_{t}, the suppression strength factor; and *t*(*x*, *y*), the suppression term.

### 3.5 Split Bregman algorithm for the proposed model

By introducing an auxiliary variable *d* and an error *b*, *E*_{1}(*R*) can be rewritten in split Bregman form and solved iteratively.

### 3.6 Synthesis of reflectance and naturalness

The reflectance *R* and the mapped naturalness component \( {J}_n^m \) are recombined to obtain the final enhanced result.

## 4 Results

The parameters *α*, *β*, *γ*_{1}, *γ*_{2}, and tol are set to 0.1, 0.01, 0.1, 0.2, 10, and 0.01, empirically. The proposed algorithm consists of three major modules: an anti-degraded module, a naturalness-preserved module, and a contrast enhancement module. Figure 4 shows the experimental results obtained with the anti-degraded model. It can be seen that the color cast caused by scattering has been prevented; however, the details of the haze-free images still cannot be seen in the dark areas. Figure 5 shows the results of the proposed *L*_{1}*L*_{2}VR method alone: overexposed areas are suppressed, details in the dark areas are enhanced, and global lightness and contrast are improved, but the color cast cannot be prevented during the *L*_{1}*L*_{2}VR process. Our proposed algorithm combines the AD method and the *L*_{1}*L*_{2}VR method, which successfully overcomes both drawbacks. As shown in Fig. 6, the results obtained by the proposed algorithm have a natural look, with enhanced details in dark areas and no color cast.

## 5 Discussion

### 5.1 Subjective assessment

The proposed AD-*L*_{1}*L*_{2}VR produces very natural-looking images. The enhanced images reveal many details in the background regions as well as in other areas of interest, while NPEA introduces a light color cast, especially in Figs. 8e and 10e.

### 5.2 Objective assessment

Table 1 shows the quantitative measurement results of the contrast. As shown in Table 1, LHE and MSR have higher contrast than the other methods. However, the proposed algorithm and the NPEA method have a better subjective assessment performance than the other three methods.

The third metric is LOE, which is used to evaluate naturalness preservation. By its definition, a smaller LOE value represents better naturalness preservation. As shown in Table 3, the proposed algorithm has the best naturalness preservation performance.

In summary, compared with other relevant state-of-the-art enhancement methods, the proposed algorithm not only preserves more details and prevents halo artifacts but also prevents the color cast caused by scattering. The proposed algorithm achieves good quality under both subjective and objective assessment, making it a good way to increase diagnostic accuracy and reduce misdiagnosis in endoscopic imaging.

## 6 Conclusions

This paper proposes a novel image enhancement algorithm via an anti-degraded model and *L*_{1}*L*_{2}-based variational retinex (AD-*L*_{1}*L*_{2}VR) for non-uniform illumination endoscopic images, which not only enhances the details of the image but also preserves its naturalness. The anti-degraded model is used to prevent the color cast caused by scattering. In order to estimate the reflectance and background illumination components, *L*_{1}*L*_{2}VR is proposed to constrain the TV regularization strength. Moreover, logarithmic Laplacian-based gamma correction is applied to the naturalness component to prevent the color cast caused by non-uniform illumination or scattering. Experimental results demonstrate that the proposed algorithm outperforms the other existing algorithms.

## Declarations

### Acknowledgements

The authors would like to thank the Image Engineering & Video Technology Lab for its support.

### Funding

This work was supported by the Major Science Instrument Program of the National Natural Science Foundation of China under Grant 61527802, the General Program of National Nature Science Foundation of China under Grants 61371132 and 61471043, and the International S&T Cooperation Program of China under Grant 2014DFR10960.

### Availability of data and materials

All data are fully available without restriction.

### Authors’ contributions

ZR and TX came up with the algorithm and improved the algorithm. In addition, ZR wrote and revised the paper. JL, JG, and GS implemented the algorithm of LHE, MSR, and SARV for image enhancement, and HW recorded the data. All authors read and approved the final manuscript.

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

- F Zhao, H Nie, H Chen, Group buying spectrum auction algorithm for fractional frequency reuses cognitive cellular systems. Ad Hoc Netw. 58, 239–246 (2017)
- F Zhao, W Wang, H Chen, Q Zhang, Interference alignment and game-theoretic power allocation in MIMO heterogeneous sensor networks communications. Signal Process. 126(C), 173–179 (2016)
- F Zhao, L Wei, H Chen, Optimal time allocation for wireless information and power transfer in wireless powered communication systems. IEEE Trans. Veh. Technol. 65(3), 1830–1835 (2016)
- G Iddan, G Merson, A Glukhovsky, P Swain, Wireless capsule endoscopy. Nature 405, 417–418 (2000)
- DK Iakovidis, A Koulaouzidis, Software for enhanced video capsule endoscopy: challenges for essential progress. Nat. Rev. Gastroenterol. Hepatol. 12(3), 172–186 (2015)
- A Koulaouzidis, E Rondonotti, A Karargyris, Small-bowel capsule endoscopy: a ten-point contemporary review. World J. Gastroenterol. 19(24), 3726–3746 (2013)
- T Matsuda, A Ono, M Sekiguchi, et al., Advances in image enhancement in colonoscopy for detection of adenomas. Nat. Rev. Gastroenterol. Hepatol. 14, 305–314 (2017)
- SC Ng et al., The efficacy of cap-assisted colonoscopy in polyp detection and cecal intubation: a meta-analysis of randomized controlled trials. Am. J. Gastroenterol. 107, 1165–1173 (2012)
- VP Deenadayalu, V Chadalawada, DK Rex, 170 degrees wide-angle colonoscope: effect on efficiency and miss rates. Am. J. Gastroenterol. 99, 2138–2142 (2004)
- H Fatima et al., Wide-angle (WA) (170° angle of view) versus standard (ST) (140° angle of view) colonoscopy [abstract]. Gastrointest. Endosc. 63, AB204 (2013)
- T Uraoka et al., A novel extra-wide-angle-view colonoscope: a simulated pilot study using anatomic colorectal models. Gastrointest. Endosc. 77, 480–483 (2013)
- IM Gralnek et al., Comparison of standard forward-viewing mode versus ultrawide-viewing mode of a novel colonoscopy platform: a prospective, multicenter study in the detection of simulated polyps in an in vitro colon model (with video). Gastrointest. Endosc. 77, 472–479 (2013)
- N Hasan et al., A novel balloon colonoscope detects significantly more simulated polyps than a standard colonoscope in a colon model. Gastrointest. Endosc. 80, 1135–1140 (2014)
- F Deeba, SK Mohammed, FM Bui, et al., Unsupervised abnormality detection using saliency and Retinex based color enhancement. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2016, 3871–3874 (2016)
- Y Miyake, T Kouzu, et al., Development of new electronic endoscopes using the spectral images of an internal organ. 13th Color Imaging Conf. Final Program Proc. 13(3), 261–263 (2005)
- Y Hamamoto, T Endo, et al., Usefulness of narrow-band imaging endoscopy for diagnosis of Barrett's esophagus. J. Gastroenterol. 39(1), 14–20 (2004)
- A Hoffman et al., Recognition and characterization of small colonic neoplasia with high-definition colonoscopy using i-Scan is as precise as chromoendoscopy. Dig. Liver Dis. 42(1), 45–50 (2010)
- H Okuhata et al., Application of the real-time Retinex image enhancement for endoscopic images. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2013, 3407–3410 (2013)
- S Rajput, SR Suralkar, Comparative study of image enhancement techniques. Int. J. Comput. Sci. Mobile Comput. 2(1), 11 (2013)
- L Wang, L Xiao, H Liu, Z Wei, Variational Bayesian method for retinex. IEEE Trans. Image Process. 23(8), 3381–3396 (2014)
- G Gianini, A Rizzi, E Damiani, A Retinex model based on absorbing Markov chains. Inf. Sci. 327(C), 149–174 (2016)
- Z Rahman, DJ Jobson, GA Woodell, Multi-scale retinex for color image enhancement, in Proc. ICIP 3, 1003–1006 (1996)
- ZU Rahman, DJ Jobson, GA Woodell, Retinex processing for automatic image enhancement. J. Electron. Imag. 13(1), 100–110 (2004)
- ZU Rahman, DJ Jobson, GA Woodell, Investigating the relationship between image enhancement and image compression in the context of the multi-scale retinex. J. Vis. Commun. Image Represent. 22(3), 237–250 (2011)
- K Horn, Determining lightness from an image. Comput. Graph. Image Process. (4), 277–299 (1974)
- MK Ng, W Wang, A total variation model for retinex. SIAM J. Imag. Sci. 4(1), 345–365 (2011)
- E Land, J McCann, Lightness and retinex theory. J. Opt. Soc. Am. 61(1), 1–11 (1971)
- K He, J Sun, X Tang, Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011)
- K He, J Sun, X Tang, Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013)
- D Wang, H Lu, MH Yang, Least soft-threshold squares tracking, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2371–2378 (2013)
- D Zosso, G Tran, SJ Osher, Non-local Retinex—a unifying framework and beyond. SIAM J. Imaging Sci. 8(2), 787–826 (2015)
- R Kimmel, M Elad, D Shaked, R Keshet, I Sobel, A variational framework for Retinex. Int. J. Comput. Vis. 52(1), 7–23 (2003)
- T Goldstein, S Osher, The split Bregman algorithm for L1 regularized problems. SIAM J. Imaging Sci. 2(2), 323–343 (2009)
- TK Kim, JK Paik, BS Kang, Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering. IEEE Trans. Consum. Electron. 44(1), 82–87 (1998)
- X Lan, H Shen, L Zhang, A spatially adaptive retinex variational model for the uneven intensity correction of remote sensing images. Signal Process. 101(8), 19–34 (2014)
- S Wang, J Zheng, HM Hu, Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 22(9), 3538–3548 (2013)
- E Peli, Contrast in complex images. J. Opt. Soc. Am. A 7(10), 2032–2040 (1990)
- Z Ye, H Mohamadian, Y Ye, Discrete entropy and relative entropy study on nonlinear clustering of underwater and arial images, in Proc. IEEE Int. Conf. Control Appl., 318–323 (2007)