An image denoising method based on BP neural network optimized by improved whale optimization algorithm

Abstract

As an important part of smart city construction, traffic image denoising has been studied widely. Image denoising can enhance the performance of segmentation and recognition models and improve the accuracy of segmentation and recognition results. However, owing to the different types of noise and degrees of noise pollution, traditional image denoising methods generally suffer from problems such as blurred edges and details and loss of image information. This paper presents an image denoising method based on a BP neural network optimized by an improved whale optimization algorithm. Firstly, a nonlinear convergence factor and an adaptive weight coefficient are introduced to improve the optimization ability and convergence characteristics of the standard whale optimization algorithm. Then, the improved whale optimization algorithm is used to optimize the initial weights and thresholds of the BP neural network, which overcomes the dependence on these parameters in the construction process and shortens the training time of the network. Finally, the optimized BP neural network is applied to benchmark image denoising and traffic image denoising. The experimental results show that, compared with traditional denoising methods such as Median filtering, Neighborhood average filtering and Wiener filtering, the proposed method achieves a higher peak signal-to-noise ratio.

1 Introduction

Smart city refers to the use of a new generation of information technology to connect and integrate a city's systems and services, so as to improve the efficiency of resource utilization, optimize city services, achieve fine-grained and dynamic management, and improve the quality of life of citizens [1]. From the perspective of technological development, the smart city realizes comprehensive perception, ubiquitous interconnection, pervasive computing and converged application through the support of the new generation of information technology represented by the Internet of Things (IoT), mobile Internet, cloud computing and big data [2,3,4]. For example, the IoT and mobile Internet enable intelligent perception, identification, positioning, tracking and supervision, while cloud computing, big data and intelligent analysis technologies enable massive information processing and decision support.

Intelligent transportation system (ITS) is one of the important parts of smart city construction. Through monitoring, traffic flow distribution optimization and other technologies, the traffic information network is improved and a unified intelligent urban traffic management and service system is established to realize the full sharing of traffic information, real-time monitoring and dynamic management [5, 6]. ITS has become an important technical means of modern transportation management in recent years. Its essence is a traffic system that uses sensors, network transmission, intelligent control, multimedia processing and other technologies to carry out large-scale, comprehensive, real-time and efficient monitoring and management [7, 8]. ITS has brought huge economic and social benefits to the transportation industry and has become an important part of the modern transportation system [9]. However, surveillance video and image acquisition in ITS are often limited by factors such as the acquisition equipment and the external environment, so the acquired images contain a certain amount of noise, which affects their visibility and further processing. Image denoising is a useful technical means to improve image visibility and can provide a higher-quality image source for further image processing.

Image denoising, as an image processing technology aimed at eliminating image noise, has attracted more and more attention from scholars. For image segmentation, image recognition and other technologies, image denoising can enhance the performance of segmentation and recognition models and improve the accuracy of their results [10, 11]. Traditional image denoising methods are mainly divided into spatial domain methods and transform domain methods according to the domain in which the denoising is performed [12,13,14,15]. Spatial domain methods directly analyze and process the gray values of the pixels in the noised image and mainly include Mean filtering, Median filtering and Gaussian filtering. These methods can suppress the noise in the image to a certain extent, but because all pixels are processed in the same way, they also blur the edges and details of the image while smoothing the noise, resulting in the loss of image information [16]. Transform domain methods transform the noised image from the spatial domain to a corresponding transform domain using different transformations, mainly including the Fourier transform, the Wavelet transform, sparse representation and denoising methods combined with non-local denoising theory. These methods have greatly promoted the research of image denoising [17].

In recent years, neural networks have been widely used in image feature extraction, image denoising, image segmentation and image recognition because of their good nonlinear mapping, parallel computing and adaptive abilities [18]. Among the various applications of neural networks to image processing, the back propagation (BP) neural network is used most widely. However, the BP neural network uses a gradient descent search algorithm, so it is sensitive to the initial weights and thresholds [19, 20] and easily converges to a local minimum. To solve this problem, there are usually two kinds of improvement methods [10]. The first is to use the additional momentum method, the adaptive learning rate method with a momentum term, the Levenberg–Marquardt algorithm, etc., to improve the operation mechanism and search ability of the algorithm. This kind of method can increase the running speed, but it cannot solve the problem of falling into a local minimum caused by the randomness of the initial weights and thresholds. The second is to use swarm intelligence algorithms such as the genetic algorithm and particle swarm optimization to determine the initial weights and thresholds of the BP neural network and then use the Levenberg–Marquardt algorithm for optimization, which can effectively reduce the possibility of the BP neural network falling into a local minimum.

The whale optimization algorithm (WOA) [21] is a swarm intelligence optimization algorithm proposed by Seyedali Mirjalili in 2016 that simulates the predation behavior of humpback whales. Compared with particle swarm optimization (PSO), the gravitational search algorithm (GSA) and other classical intelligent optimization algorithms, WOA has the advantages of a simple structure, few parameters and strong optimization ability [22, 23]. It has attracted wide attention from scholars and has been applied to many practical problems. However, like other intelligent optimization algorithms, the standard whale optimization algorithm also suffers from slow convergence and a tendency to fall into local optima.

1.1 Our contributions

Based on the existing research results, this paper proposes an image denoising method based on a BP neural network optimized by an improved whale optimization algorithm. Compared with Median filtering (MF), Neighborhood average filtering (NAF), Wiener filtering (WF) and the traditional BP neural network denoising algorithm, the peak signal-to-noise ratio (PSNR) of the denoised image is significantly improved. To be specific, the major contributions of this paper are threefold:

  • The nonlinear convergence factor and adaptive weight coefficient are introduced into the whale optimization algorithm to improve the standard whale optimization algorithm, which improves the optimization ability and convergence characteristics of the algorithm.

  • The improved whale optimization algorithm is used to optimize the initial weights and thresholds of BP neural network, which overcomes the dependence on the initial weights and thresholds in the construction process of BP neural network and shortens the training time of the neural network.

  • The optimized BP neural network is applied to image denoising. The benchmark images and UAS data set are selected for denoising experiment. The results show that MSWOA-BP has better denoising effect than other denoising algorithms.

1.2 Organization of the rest paper

The rest of the paper is arranged as follows. Section 2 reviews traffic image denoising, the application of BP neural network in image denoising and the improvement of whale optimization algorithm. In Sect. 3, an improved whale optimization algorithm based on mixed strategy and an image denoising method based on improved BP neural network are proposed. In Sect. 4, the complexity of the improved whale optimization algorithm is analyzed. In Sect. 5, experiments are carried out to verify the effectiveness of the improved whale optimization algorithm, and then the proposed denoising method is compared with the traditional method to verify the effectiveness of the method. Finally, some concluding remarks are presented in Sect. 6.

2 Related work

Recently, traffic image denoising has been studied. In [24], a traffic image denoising method based on low rank decomposition is proposed. The experimental results show that the proposed algorithm can remove noise more effectively and retain the effective information of the image compared with BM3D, KSVD and PCA and has better performance in the evaluation index PSNR and visual effect. Shijie and Yanbing [25] introduced the sparse representation method to traffic image processing and realized a traffic image denoising algorithm based on K-SVD orthogonal matching pursuit. Experimental results show that compared with the traditional image enhancement methods (Median filtering, Mean filtering, Wavelet filtering) and sparse representation image enhancement method based on DCT redundant dictionary, the proposed algorithm can remove traffic image noise more effectively and get higher PSNR. Aiming at the problem of image denoising in license plate recognition preprocessing, an adaptive coupled partial differential equation (PDE) denoising model is proposed in [26], which can more effectively remove the mixed noise in the license plate image, protect the edge information of the image and improve the PSNR. The denoised image is more conducive to the subsequent character segmentation and recognition. It can effectively improve the recognition accuracy of license plate image.

Generally speaking, whether for traffic images or other images, research on denoising technology can be divided into three directions: (1) traditional filtering methods [27], such as the median filter and the mean filter; (2) wavelet-based denoising [28, 29]; (3) denoising based on neural networks and deep learning [30]. Among them, denoising based on neural networks and deep learning has been a research hotspot in recent years. Wei et al. [31] propose an alternating directional 3D quasi-recurrent neural network for hyperspectral image denoising, and experimental results demonstrate significant improvement over the state-of-the-art under various noise settings, in terms of both restoration accuracy and computation time. Zhang et al. [32] propose a deep unfolding model dubbed AMP-Net, and experimental results show that AMP-Net achieves better reconstruction accuracy than other state-of-the-art methods with high reconstruction speed and a small number of network parameters. Shi et al. [33] propose a dual-attention denoising network that considers the global dependence and correlation between spatial and spectral information, and experimental results on simulated and real data substantiate the superiority of their method both visually and quantitatively when compared with state-of-the-art methods. Wang [11] studied an image denoising model based on the BP neural network and pointed out that the model is sensitive to the initial weights, easily falls into a local minimum and may fail to converge to the specified target; by introducing the PSO algorithm into the denoising model, the randomness of the initial weights is effectively reduced and the denoising effect of the model is improved.

To address the slow convergence of the BP neural network in image denoising and its tendency to fall into local extrema, great efforts have been made to improve it. According to the distribution characteristics of speckle noise in medical images, Jing et al. [34] proposed a denoising model based on the nonlinear mapping ability of the BP neural network. The model can remove the speckle noise in ultrasonic images and retain the edge features of the image, but its overall denoising effect is poor and its running time is long. Yan et al. [35] proposed a new image denoising method based on a BP neural network improved by an improved copula estimation of distribution algorithm. A centroid mutation operator is introduced on the basis of the traditional copula estimation of distribution algorithm, which effectively overcomes the tendency of the BP neural network to fall into local optima in image denoising, but the overall complexity of the model is high and its overall performance is poor. Aiming at the shortcomings of the BP neural network in image denoising, such as slow convergence and a tendency to fall into local minima, Wang et al. [36] used the gray wolf optimization algorithm to optimize the initial parameters of the neural network. The convergence speed of the improved network is greatly increased, but the denoising effect still needs to be improved. On the basis of [36], the work in [37] improves the noise removal ability of the model by enhancing the global optimization ability of the gray wolf optimization algorithm: the convergence mode of the convergence factor is changed and differential evolution is introduced to increase population diversity. The improved model removes Gaussian noise better, but its ability to remove other types of noise is not greatly improved.

To address the slow convergence, premature convergence and tendency to fall into local optima of the whale optimization algorithm, effective improvements have been made to the standard algorithm [38]. Arora et al. [39] proposed CWOA, which uses a chaotic map to optimize the update probability p in WOA, and benchmark function tests verify that the algorithm has a high convergence speed. Mirjalili et al. [40] fused the simulated annealing algorithm with the whale optimization algorithm to improve its solution accuracy and global search capability and obtained good results on 18 public data sets from the UCI repository. Oliva et al. [41] proposed OBWOA, which uses opposition-based learning for initialization to enhance the exploration of the search space, and applied OBWOA to three different diode models to estimate the parameters of solar cells; experiments showed that the method has good performance. Zhang et al. [42] proposed MOWA, which first initializes the population with a chaotic sequence generated by a piecewise logistic chaotic map to maintain the initial population diversity and introduces piecewise adaptive weights to balance the algorithm's global exploration and local exploitation capabilities; the performance of the algorithm was verified on six benchmark functions. To solve large-scale complex optimization problems, Long et al. [43] used an opposition-based learning strategy to initialize the population positions, designed a nonlinear convergence factor to coordinate the exploration and exploitation abilities of the algorithm, and introduced a diversity mutation operation to alleviate premature convergence. However, although the existing research has improved the optimization performance of the standard WOA, problems such as the balance between global exploration and local exploitation, slow convergence and the tendency to fall into local optima still need further study [44,45,46].

3 Methodology

3.1 Improved whale optimization algorithm

3.1.1 Whale optimization algorithm

The inspiration of WOA originates from the unique foraging behavior of humpback whales: after finding a foraging target, a whale creates a bubble net and follows a spiral-shaped path to capture the prey (as shown in Fig. 1). The specific behaviors include encircling prey, bubble-net attacking and search for prey [47]. Among them, bubble-net attacking can be divided into a shrinking encircling mechanism and spiral position updating. The foraging behavior of humpback whales can be described mathematically as follows:

Fig. 1 Bubble-net feeding behavior of humpback whales

Encircling Prey

In the process of hunting, the whale needs to determine the position of its prey before encircling and capturing it. Since the position of the prey in the search space is unknown, WOA assumes that the optimal solution in the current population is the target prey. After the prey is determined, the other whales in the population update their positions according to the current position of the prey using the following formulas:

$$D = \left| {C \cdot X^{*} (t) - X(t)} \right|$$
(1)
$$X(t+1)=X^*(t)-A\cdot D$$
(2)

where t is the current number of iterations, \(X^*(t)\) indicates the position of the optimal whale in the tth iteration, and X(t) is the position of an individual whale in the tth iteration. A and C are coefficients calculated using the following equations:

$$A=2a\cdot r_1-a$$
(3)
$$C=2\cdot r_2$$
(4)

where \(r_1\) and \(r_2\) are random numbers in [0,1], a decreases linearly from 2 to 0 as the number of iterations increases.
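As a concrete illustration, the encircling-prey step of Eqs. (1)–(4) can be sketched as follows. This is a minimal NumPy sketch rather than the authors' implementation; the whale position X and the best position X_star are assumed to be 1-D arrays of equal length.

```python
import numpy as np

def encircle_prey(X, X_star, a):
    """Shrinking-encircling update of Eqs. (1)-(4): move whale X toward
    the current best position X_star; a is the convergence factor."""
    r1, r2 = np.random.rand(), np.random.rand()
    A = 2 * a * r1 - a           # Eq. (3)
    C = 2 * r2                   # Eq. (4)
    D = np.abs(C * X_star - X)   # Eq. (1): distance term
    return X_star - A * D        # Eq. (2): new position
```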

Bubble-net attacking

In WOA, the whale chooses one of the two behaviors, shrinking encircling or spiral position updating, according to a random probability value. Shrinking encircling is realized through Eq. (3): the convergence factor a decreases linearly as the number of iterations increases, and A is a random value in \([-a,a]\). If \(\mid A\mid <1\), the individual whale gradually approaches the prey after updating its position and completes the shrinking encircling of the prey according to Eq. (2). Spiral position updating first calculates the distance between the individual whale and the prey and then moves the whale toward the prey along a spiral path. The mathematical model of this behavior is described as follows:

$$D^{\prime} = \left| {X^{*} (t) - X(t)} \right|$$
(5)
$$X(t+1)=D^{\prime}\cdot e^{bl}\cdot \cos (2\pi l)+X^*(t)$$
(6)

where \(D'\) in Eq. (5) is the distance between the ith whale and its prey. l is a random number in [\(-1,1\)]: when \(l=-1\), the individual whale is closest to the foraging target, and when \(l=1\), it is farthest from the target; b is a constant used to define the shape of the logarithmic spiral.

In the process of bubble-net attacking, a random probability value p determines which of the two position updating modes the whale uses. WOA assumes that each of the two predatory behaviors is performed with a probability of 50%. The attacking process can be described as follows:

$$X(t + 1) = \left\{ {\begin{array}{*{20}l} {X^{*} (t) - A \cdot D} \hfill & {p < 0.5} \hfill \\ {D^{\prime} \cdot e^{{bl}} \cdot \cos (2\pi l) + X^{*} (t)} \hfill & {p \ge 0.5} \hfill \\ \end{array} } \right.$$
(7)

Search for prey

When \(\mid A\mid \ge 1\), WOA randomly selects an individual from the current population and searches around it. The mathematical model is as follows:

$$D = \left| {C \cdot X_{{{\mathrm{rand}}}} - X} \right|$$
(8)
$$X(t+1)=X_{\mathrm{{rand}}}-A\cdot D$$
(9)

where \(X_{\mathrm{{rand}}}\) is the position of a random individual in the current population.
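Putting Eqs. (7)–(9) together, one position update of the standard WOA can be sketched as follows. This is a simplified sketch under the assumption that A and C are treated as scalars (in the full algorithm they may be drawn per dimension); it is not the authors' code.

```python
import numpy as np

def woa_update(X, X_star, X_rand, a, b=1.0):
    """One standard WOA position update, selecting among Eqs. (2), (6) and (9)."""
    r1, r2 = np.random.rand(), np.random.rand()
    A = 2 * a * r1 - a                      # Eq. (3)
    C = 2 * r2                              # Eq. (4)
    p = np.random.rand()                    # behaviour-selection probability
    if p < 0.5:
        if abs(A) < 1:                      # shrinking encircling, Eqs. (1)-(2)
            D = np.abs(C * X_star - X)
            return X_star - A * D
        else:                               # search for prey, Eqs. (8)-(9)
            D = np.abs(C * X_rand - X)
            return X_rand - A * D
    l = np.random.uniform(-1, 1)            # spiral update, Eqs. (5)-(6)
    D_prime = np.abs(X_star - X)
    return D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + X_star
```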

3.1.2 WOA based on mixed strategy

Although the standard whale optimization algorithm has few parameters and good performance, it still suffers from low solution accuracy, slow convergence and a tendency to fall into local optima. To overcome these shortcomings, this paper improves the algorithm in two respects, the position update strategy and the prevention of premature convergence to local optima, and proposes an improved whale optimization algorithm based on a mixed strategy (MSWOA).

Nonlinear convergence factor Like other swarm intelligence optimization algorithms, the whale optimization algorithm can suffer from an imbalance between global search ability and local search ability in the process of finding the optimal solution. In the standard whale optimization algorithm, the parameter that controls whether the algorithm performs a global or a local search is A: when \(\mid A \mid \ge 1\), the algorithm performs a random global search with a probability of 0.5; when \(\mid A \mid <1\), the algorithm performs a local search. The value of A mainly depends on the convergence factor a, so the way a changes is very important for finding the optimal solution. A larger convergence factor provides stronger global search ability and helps the algorithm avoid local optima, while a smaller convergence factor provides stronger local search ability and accelerates convergence. In the standard whale optimization algorithm, the convergence factor a decreases linearly as the number of iterations increases, which makes the convergence of the algorithm too slow. Therefore, inspired by the improvement strategies proposed in [44, 46, 48], this paper introduces a nonlinear strategy to dynamically adjust the convergence factor a without changing its overall decreasing trend, which balances the global and local search abilities of the algorithm and accelerates its convergence. The mathematical formula is as follows:

$$a=\frac{2}{e-1}\times \left( e^{1-(\frac{t}{T})^\lambda }-1\right)$$
(10)

where t is the current number of iterations, T is the maximum number of iterations, and \(\lambda\) is a constant. In this paper, we take \(\lambda =3\). The change of the improved convergence factor is shown in Fig. 2.

Fig. 2 Change of the improved convergence factor a
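A minimal sketch of the schedule in Eq. (10) is given below (with \(\lambda =3\) as in this paper); the comparison with the linear schedule \(a=2(1-t/T)\) is only illustrative.

```python
import numpy as np

def nonlinear_a(t, T, lam=3):
    """Nonlinear convergence factor of Eq. (10): decreases from 2 (t=0) to 0 (t=T)."""
    return 2.0 / (np.e - 1.0) * (np.exp(1.0 - (t / T) ** lam) - 1.0)

# Compare with the linear schedule at the start, midpoint and end of the run.
T = 500
for t in (0, 250, 500):
    print(t, round(nonlinear_a(t, T), 3), round(2 * (1 - t / T), 3))
# prints: 0 2.0 2.0 | 250 1.628 1.0 | 500 0.0 0.0
```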

Adaptive weight coefficient In the standard whale optimization algorithm, the position of the prey is the position of the current optimal solution. However, during the execution of the algorithm, the prey position \(X^*(t)\) in the position update Eqs. (2), (6) and (9) is not fully exploited. In this paper, an adaptive weight coefficient is introduced so that the optimal solution is used more effectively, improving the accuracy of the algorithm. The adaptive weight coefficient is defined as follows:

$$\omega (t)=\frac{4}{\pi }\arctan \left( \frac{t}{{\mathrm{max}}\_{\mathrm{iter}}}\right)$$
(11)
$$X(t+1)=\omega (t)\cdot X^*(t)-A\cdot D \quad \left| A \right|<1,p<0.5$$
(12)
$$X(t+1)=\omega (t)\cdot X_{\mathrm{{rand}}}-A\cdot D \quad \left| A \right| \ge 1,p<0.5$$
(13)
$$X(t+1)=D'\cdot e^{bl}\cdot \cos (2\pi l)+(1-\omega (t))\cdot X^*(t) \quad p\ge 0.5$$
(14)

where \(max\_iter\) is the maximum number of iterations (denoted T in Eq. (10)).

In Eqs. (12) and (13), the adaptive weight coefficient \(\omega (t)\) increases with the number of iterations, which means that the position of the prey moves closer to the theoretical optimal solution after each iteration, improving the optimization accuracy of the algorithm. In the spiral update of Eq. (14), the whales keep approaching the prey as the iterations proceed; at this stage the smaller weight \(1-\omega (t)\) makes it easier for a whale to check whether a better solution exists around the prey while updating its position, thereby improving the local search ability of the algorithm.
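Combining Eqs. (10)–(14), one MSWOA position update can be sketched as follows; this is a simplified sketch that treats A as a scalar and is not the authors' implementation.

```python
import numpy as np

def mswoa_update(X, X_star, X_rand, t, max_iter, b=1.0, lam=3):
    """One MSWOA position update combining the nonlinear convergence factor
    (Eq. (10)) and the adaptive weight coefficient (Eqs. (11)-(14))."""
    a = 2.0 / (np.e - 1.0) * (np.exp(1.0 - (t / max_iter) ** lam) - 1.0)  # Eq. (10)
    w = 4.0 / np.pi * np.arctan(t / max_iter)                             # Eq. (11)
    r1, r2, p = np.random.rand(3)
    A = 2 * a * r1 - a
    C = 2 * r2
    if p < 0.5:
        if abs(A) < 1:                          # Eq. (12): weighted encircling
            D = np.abs(C * X_star - X)
            return w * X_star - A * D
        D = np.abs(C * X_rand - X)              # Eq. (13): weighted random search
        return w * X_rand - A * D
    l = np.random.uniform(-1, 1)                # Eq. (14): weighted spiral update
    D_prime = np.abs(X_star - X)
    return D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + (1 - w) * X_star
```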

3.2 MSWOA-BP denoising method

3.2.1 BP neural network

A BP neural network is a neural network with forward propagation of signals and back-propagation of errors. It is mainly composed of three kinds of neuron layers, namely the input layer, the hidden layer and the output layer [49, 50]. Neurons in adjacent layers are fully connected, while neurons within the same layer are not connected. The number of hidden layers is not fixed [51]. Taking one hidden layer as an example, the structure is shown in Fig. 3.

Fig. 3 BP neural network structure

In forward propagation, the input variables are passed from the input layer through the hidden layer to the output layer, where the required information is obtained. During this pass the weights and offsets of the network nodes remain unchanged, and the state of the neurons in each layer is transmitted to the next layer through the activation function. If the output error does not converge, error back-propagation is carried out. In back-propagation, contrary to forward propagation, the error is propagated layer by layer from the output layer to the input layer; the output error is allocated to all the units of each layer, so that the error signal of each layer is obtained and the weights of the units are corrected. Through repeated training, the weights and offsets of the nodes are modified until the expected error accuracy is achieved.
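As a reference point for the rest of this section, the forward pass of a single-hidden-layer BP network can be sketched as follows. The sigmoid hidden activation and linear output are assumptions made for illustration; the paper does not specify the activation functions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_forward(x, W1, B1, W2, B2):
    """Forward pass: input x -> hidden layer (sigmoid) -> output layer (linear)."""
    h = sigmoid(W1 @ x + B1)   # hidden-layer activations
    return W2 @ h + B2         # network output
```

The back-propagation step then adjusts W1, B1, W2 and B2 according to the gradient of the output error.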

3.2.2 Principle of MSWOA-BP method

The improved WOA is introduced to use its global search ability to optimize the initial weights and thresholds of the BP neural network. Since the purpose of establishing the network is to make the error between the denoised image and the original image as small as possible, the mean square error (MSE) between the real output value and the expected output value of the network is used as the fitness function. The principle diagram of the MSWOA-BP denoising method is shown in Fig. 4.

Fig. 4 Principle of MSWOA-BP method
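Concretely, the fitness of one whale (one candidate set of network parameters) can be sketched as follows. The sketch assumes a decode helper that maps the flat position vector to the network parameters (see Sect. 3.2.3) and reuses the bp_forward sketch above; it is illustrative rather than the authors' code.

```python
import numpy as np

def fitness(position, decode, X_train, Y_train):
    """Fitness of one whale: MSE between the network outputs and the clean
    target pixels when the network uses the weights encoded in `position`."""
    W1, B1, W2, B2 = decode(position)
    Y_pred = np.array([bp_forward(x, W1, B1, W2, B2) for x in X_train])
    return np.mean((Y_pred.ravel() - Y_train.ravel()) ** 2)
```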

In image degradation, the gray value of a point is closely related to the gray values of its neighborhood: even if two pixels have the same gray value, their degraded values differ when their neighborhoods differ. In view of the great influence of the neighborhood on the gray value, this paper adopts a block-based idea when extracting image features. As shown in Fig. 4, the noisy image is first padded, and then the gray values of each pixel and of the several pixels around it are used as the input of the neural network. The entire padded noised image is traversed with a sliding window to obtain all the input data for neural network training. For the output data, the output of the model is the noise-free gray value of the pixel at the center of the sliding window. The choice of input and output data also determines the number of nodes in the input and output layers of the BP neural network.
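A minimal sketch of this patch extraction is given below. The window size of 5 follows Sect. 5.3; the symmetric padding mode and the use of a clean reference image for the targets are assumptions for illustration.

```python
import numpy as np

def extract_patches(noisy, clean, win=5):
    """Build training pairs: each win x win window of the padded noisy image is one
    input vector; the clean pixel at the window centre is the corresponding target."""
    pad = win // 2
    padded = np.pad(noisy, pad, mode='symmetric')  # padding mode is an assumption
    h, w = noisy.shape
    X, Y = [], []
    for i in range(h):
        for j in range(w):
            X.append(padded[i:i + win, j:j + win].ravel())  # win*win input features
            Y.append(clean[i, j])                           # noise-free centre pixel
    return np.array(X), np.array(Y)
```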

3.2.3 Steps of MSWOA-BP method

The specific steps to obtain the initial weights and thresholds of the BP neural network by improved whale optimization algorithm are as follows:

Step 1 Determine the topology of BP neural network, and set the number of nodes in the input layer, hidden layer, and output layer of the neural network (inputnum, hiddennum, outputnum) and other related parameters;

Step 2 Initialize N, \(Max\_iter\), lb, ub and other parameters of the whale optimization algorithm, and calculate the dimension dim according to the number of nodes in each layer of the BP neural network set in Step 1. Then initialize the initial positions \(X_i=(x_{i1},x_{i2},\dots ,x_{idim})^T,(i=1,2,\dots ,N)\) of the whale population \(X=(X_1,X_2,\dots ,X_N)\). The dimension dim is calculated as follows:

$${\mathrm{dim}}={\mathrm{inputnum}}\times {\mathrm{hiddennum}}+{\mathrm{hiddennum}}+{\mathrm{hiddennum}}\times {\mathrm{outputnum}}+{\mathrm{outputnum}}$$
(15)

Step 3 Calculate and compare the fitness \(f_i\) of each whale individual in the population to determine the individual with the best current fitness value, defined as \(X^*\). In this paper, the weights and thresholds corresponding to an individual in the population are taken as the weights and thresholds of the BP neural network in each iteration, and the absolute error value of the BP neural network is taken as its fitness value;

Step 4 Update the position of every whale in the population: if \(p<0.5\) and \(\mid A\mid <1\), use Eq. (12); if \(p<0.5\) and \(\mid A\mid \ge 1\), use Eq. (13); if \(p\ge 0.5\), use Eq. (14);

Step 5 Evaluate the whole whale population to find the global optimal whale and its positions;

Step 6 If the termination condition (maximum number of iterations) of the algorithm is met, it ends; otherwise, go to Step 3 and continue the algorithm iteration;

Step 7 The global optimal solution \(X^*\) obtained by whale optimization algorithm is transformed into the weight (W1, W2) and threshold (B1, B2) of BP neural network.
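A minimal sketch of Step 7, together with the dimension calculation of Eq. (15), is given below; the layer sizes (25 inputs for a 5×5 window, 15 hidden nodes, 1 output) follow Sects. 5.3 and 5.4, and the ordering of the parameters inside the vector is an assumption.

```python
import numpy as np

def decode(x_star, inputnum=25, hiddennum=15, outputnum=1):
    """Split the optimal whale position (length dim, Eq. (15)) into the
    BP network weights (W1, W2) and thresholds (B1, B2)."""
    dim = inputnum * hiddennum + hiddennum + hiddennum * outputnum + outputnum
    assert x_star.size == dim
    i = 0
    W1 = x_star[i:i + inputnum * hiddennum].reshape(hiddennum, inputnum)
    i += inputnum * hiddennum
    B1 = x_star[i:i + hiddennum]
    i += hiddennum
    W2 = x_star[i:i + hiddennum * outputnum].reshape(outputnum, hiddennum)
    i += hiddennum * outputnum
    B2 = x_star[i:i + outputnum]
    return W1, B1, W2, B2
```

The decoded parameters are then used as the starting point for the usual BP training.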

For the network model obtained in the previous steps, in order to test the denoising ability of the model, this paper selects common benchmark images for denoising experiments. By adding Gaussian noise of different intensities to the benchmark images, the noised images are obtained and used as the input of the model. Through the denoising model, the denoised images are obtained. The denoised images are compared with the standard images to obtain the peak signal-to-noise ratio (PSNR).

4 Complexity analysis of improved algorithm

The computational complexity depends on the number of operations the algorithm performs. In the standard whale optimization algorithm, the computational complexity is mainly determined by the population size N, the maximum number of iterations \(Max\_iter\) and the search space dimension Dim, and it is \(O(N\cdot Max\_iter\cdot Dim)\). According to the procedure of the improved algorithm, the nonlinear convergence factor adjustment strategy and the adaptive weight coefficient add on the order of \(N\cdot Max\_iter\cdot Dim\) extra operations, so the total number of operations is roughly \(2\cdot N\cdot Max\_iter\cdot Dim\); the asymptotic complexity therefore remains \(O(N\cdot Max\_iter\cdot Dim)\), although the constant factor is higher than that of the standard whale optimization algorithm.

The space complexity is mainly affected by the population size N and the search space dimension Dim. Since the improved algorithm has no changes in the population size and search space dimension, the space complexity is \(O(N\cdot Dim)\).

5 Results and discussion

The simulation experiments in this paper are based on Intel(R) Core(TM) i5-7300HQ CPU, 2.50 GHz, 16 GB memory and Windows 10 (64-bit) operating system, and the programming software is Matlab R2016b.

5.1 Evaluation index of denoising effect

In order to evaluate the denoising effect of the proposed denoising model, the peak signal-to-noise ratio (PSNR) is used to compare the denoised image with the standard image. PSNR is the ratio of the peak signal energy to the average noise energy and is expressed in decibels (dB). The evaluation is based on the mean square error (MSE); for two gray-scale images A(i, j) and B(i, j) of size \(m\times n\), the MSE is given by Eq. (16):

$${\mathrm{{MSE}}}=\frac{1}{mn}\sum _{i=1}^{m} \sum _{j=1}^{n} \Vert A(i,j)-B(i,j)\Vert ^2$$
(16)

Based on MSE, the formula of PSNR is shown in Eq. (17):

$${\mathrm{{PSNR}}}=10\cdot \log _{10}\left( \frac{\mathrm{{MAX}}_I^2}{\mathrm{{MSE}}}\right)$$
(17)

where \(\mathrm{{MAX}}_I\) is the maximum pixel value in the image.
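For reference, Eqs. (16) and (17) can be computed directly as below; the maximum pixel value of 255 assumes 8-bit grayscale images.

```python
import numpy as np

def psnr(A, B, max_i=255.0):
    """PSNR of Eqs. (16)-(17) for two grayscale images of the same size."""
    mse = np.mean((A.astype(np.float64) - B.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(max_i ** 2 / mse)
```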

5.2 The performance of improved whale optimization algorithm

In order to test the optimization ability of the improved whale optimization algorithm (MSWOA), 12 benchmark functions are selected and compared with particle swarm optimization (PSO), gray wolf optimization (GWO) and standard whale optimization algorithm (WOA). The 12 benchmark functions are shown in Table 1.

Table 1 12 benchmark functions
Table 2 Parameter setting
Table 3 Performance comparison of four algorithms

In this work, the values of population size and \({\mathrm{{Max}}}_{\mathrm{{iter}}}\) of all algorithms are 30 and 500, respectively. The initial parameter settings of each algorithm used in the experiment are shown in Table 2.

To test the optimization performance of MSWOA, Table 3 shows the test results of PSO, GWO, WOA and MSWOA after 30 independent experiments, which are compared from the average value (AVG) and standard deviation (STD) of the optimal target value.

It can be seen from Table 3 that, among the 12 benchmark functions, the MSWOA algorithm proposed in this paper obtains the best results on 10 test functions (F1–F10), and only its results on functions F11 and F12 are slightly worse than those of the other algorithms. Taking the average value and standard deviation of the optimal solution as the evaluation criteria, the solution accuracy of MSWOA is significantly better than that of the other three intelligent algorithms, and the algorithm is robust. In order to reflect more intuitively the convergence speed of MSWOA and its ability to jump out of local optima, the convergence curves of the algorithms are shown in Fig. 5.

Fig. 5 Convergence curve of MSWOA and other algorithms

It can be seen from the convergence curves in Fig. 5 that MSWOA has the fastest convergence speed on the 12 benchmark functions compared with PSO, GWO and the standard WOA, which effectively saves optimization time. The convergence curves of MSWOA show abrupt drops, especially when optimizing the five functions F1, F2, F3, F4 and F5, where there are multiple inflection points, indicating that its ability to jump out of local optima has been effectively enhanced. Therefore, compared with the other intelligent algorithms, MSWOA has a stronger ability to escape local optima.

5.3 Selection of sliding window and hidden layer node

To determine the size of the sliding window and the number of nodes in the hidden layer, based on the basic criteria proposed in [52], this paper uses comparative experiments: Lena images corrupted by Gaussian noise with an intensity (variance) of 0.05 are denoised by the BP neural network with sliding window sizes of 3, 5 and 7 and with 13, 15 and 17 hidden layer nodes, respectively, and the PSNR values after denoising are compared. The experimental results are shown in Table 4. Based on the comparison results, in the following experiments the sliding window size is \(5*5\) and the number of hidden layer nodes is 15.

Table 4 Performance of BP neural network with different number of hidden layer nodes

5.4 Image denoising experiment 1: benchmark images

In this experiment, the number of hidden layer nodes is 15, the number of training epochs is 500, the learning rate is 0.05, the training goal is 0.001 and the training function is trainlm. The Lena image of size \(512*512\) is used as the training sample of the neural network. For the benchmark image, the imnoise() function in Matlab is used to add Gaussian noise with a mean value of 0 and different variances. The images after adding the Gaussian noise are shown in Fig. 6. The noised images with different noise intensities are denoised by Wiener filtering, Neighborhood average filtering, Median filtering, the BP neural network, WOA-BP and MSWOA-BP, respectively. The PSNR results after denoising are shown in Table 5.
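For readers who do not use Matlab, the noise model of imnoise(I, 'gaussian', 0, var) can be approximated as follows; treating the image on a [0, 1] scale and clipping afterwards mirrors imnoise's behavior, but this NumPy version is only a rough equivalent, not the code used in the experiments.

```python
import numpy as np

def add_gaussian_noise(img, var=0.03, mean=0.0):
    """Approximate Matlab's imnoise(img, 'gaussian', mean, var) for an 8-bit image."""
    img01 = img.astype(np.float64) / 255.0
    noisy = img01 + np.random.normal(mean, np.sqrt(var), img01.shape)
    return np.clip(noisy, 0.0, 1.0) * 255.0
```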

Fig. 6 Lena image with different Gaussian noise: a without noise, b variance of 0.03, c variance of 0.04, d variance of 0.06, e variance of 0.08

Table 5 PSNR of image denoising

It can be seen from Table 5 that as the noise intensity increases, the denoising effect of every denoising algorithm gradually weakens. Under the same noise intensity, the peak signal-to-noise ratio of MSWOA-BP is the highest, and the denoising effect is the best. In the case of Gaussian noise with a variance of 0.03, the maximum PSNR of MSWOA-BP for Lena noised images is 28.32.

In order to further verify the denoising effect (generalization ability) of MSWOA-BP, 4 benchmark images (Man, Boat, Butterfly, House) are selected for testing. The size of the Man and Boat images is \(512*512\), and the size of the Butterfly and House images is \(256*256\). The Gaussian noise added to the images has a variance of 0.03. The trained network is used for denoising, and the results are shown in Fig. 7 and Table 6.

Fig. 7 Denoising effect of different denoising algorithms for benchmark images

Table 6 PSNR with different denoising algorithms for benchmark images
Table 7 PSNR with different denoising algorithms for traffic images

It can be seen from Table 6 that the PSNR values of MSWOA-BP are all less than 28.32, which shows that its generalization denoising effect is limited by the denoising ability of the trained network; however, the values are close to it (the maximum difference is 3.61), indicating that the trained network has good generalization and denoising ability. Compared with traditional Median filtering, Neighborhood average filtering and Wiener filtering, MSWOA-BP has a higher PSNR, and compared with BP and WOA-BP, MSWOA-BP also has certain advantages in PSNR.

5.5 Image denoising experiment 2: traffic images

In this section, 6 traffic images from the UAS (UESTC All-Day Scenery) [53] data set are selected and converted to grayscale for the denoising experiment, as shown in Fig. 8, named TI-1, TI-2, TI-3, TI-4, TI-5 and TI-6, respectively. The size of each image is 640 * 360. Figure 8a is used for training the neural networks, and Fig. 8b–f are used for verifying the denoising effect. The intensity (variance) of the added Gaussian noise is 0.01. The experimental results are shown in Table 7 and Fig. 9.

Fig. 8 Original traffic images: a TI-1, b TI-2, c TI-3, d TI-4, e TI-5, f TI-6

Fig. 9 Denoising effect of different denoising algorithms for traffic images: a original image, b noised image, c Median filtering result, d Neighborhood average filtering result, e Wiener filtering result, f BP result, g WOA-BP result, h MSWOA-BP result

Table 7 shows the PSNR of the different denoising algorithms on the traffic images. It can be seen that for each traffic image the denoising effect of MSWOA-BP is the best, being obviously better than Median filtering, Neighborhood average filtering and Wiener filtering. Figure 9 shows the denoising effect of the different algorithms on Fig. 8d. From the perspective of visual effect, each denoising algorithm has a certain denoising effect; the result of Wiener filtering makes the image boundaries more blurred, while the image denoised by MSWOA-BP is clearer.

6 Conclusion

Traffic image denoising is a useful technique for intelligent transportation systems in smart city construction. This paper proposes an image denoising method based on a BP neural network optimized by an improved whale optimization algorithm to denoise traffic images and improve image quality and visibility. The improved whale optimization algorithm is used to optimize the initial weights and thresholds of the BP neural network, which overcomes the dependence on the initial weights and thresholds in the construction process of the BP neural network and shortens the training time of the network. The improved whale optimization algorithm's ability to jump out of local optima is effectively enhanced, and compared with other intelligent optimization algorithms its convergence speed is also improved. The image denoising experiments show that, compared with the traditional Median filtering, Neighborhood average filtering, Wiener filtering and other denoising algorithms, MSWOA-BP achieves a significant improvement in peak signal-to-noise ratio (PSNR). In the future, the following research will be considered: (1) improve the performance of the MSWOA-BP method; (2) introduce other intelligent optimization algorithms such as the artificial bee colony algorithm (ABC); (3) apply the proposed method to more image fields, such as medical images and hyperspectral images.

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

IoT: Internet of Things

ITS: Intelligent transportation system

WOA: Whale optimization algorithm

BP: Back propagation

PSO: Particle swarm optimization

GSA: Gravitational search algorithm

UAS: UESTC all-day scenery

PDE: Partial differential equation

CWOA: Chaotic whale optimization algorithm

MSWOA: Improved whale optimization algorithm based on mixed strategy

GWO: Gray wolf optimization

MSE: Mean square error

MF: Median filtering

NAF: Neighborhood average filtering

WF: Wiener filtering

PSNR: Peak signal-to-noise ratio

AVG: Average value

STD: Standard deviation

ABC: Artificial bee colony algorithm

References

  1. R.A. Gonzalez, R.E. Ferro, D. Liberona, Government and governance in intelligent cities, smart transportation study case in Bogotá Colombia. Ain Shams Eng. J. 11(1), 25–34 (2020). https://doi.org/10.1016/j.asej.2019.05.002

  2. A. Molnar, Smart cities education: an insight into existing drawbacks. Telemat. Inform. (2020). https://doi.org/10.1016/j.tele.2020.101509

  3. R. Saborido, E. Alba, Software systems from smart city vendors. Cities 101, 102690 (2020). https://doi.org/10.1016/j.cities.2020.102690

  4. R. Rani, V. Kashyap, M. Khurana, Role of IoT-cloud ecosystem in smart cities: review and challenges. Mater. Today Proc. (2020). https://doi.org/10.1016/j.matpr.2020.10.054

  5. J. Yan, J. Liu, F.M. Tseng, An evaluation system based on the self-organizing system framework of smart cities: a case study of smart transportation systems in China. Technol. Forecast. Soc. Change 153, 119371 (2020)

  6. S. Saharan, S. Bawa, N. Kumar, Dynamic pricing techniques for intelligent transportation system in smart cities: a systematic review. Comput. Commun. (2019). https://doi.org/10.1016/j.comcom.2019.12.003

  7. Z. Karami, R. Kashef, Smart transportation planning: data, models, and algorithms. Transp. Eng. 2, 100013 (2020). https://doi.org/10.1016/j.treng.2020.100013

  8. S. Agachai, H.H. Wai, Smarter and more connected: future intelligent transportation system. IATSS Res. 42, 67–71 (2018). https://doi.org/10.1016/j.iatssr.2018.05.005

  9. S. Siuhi, J. Mwakalonge, Opportunities and challenges of smart mobile applications in transportation. J. Traffic Transp. Eng. 06, 96–106 (2016). https://doi.org/10.1016/j.jtte.2016.11.001

  10. H. Wang, N. Menke, T. Jin, The application of bat neural network algorithm in image denoising. Microelectron. Comput. 35, 121–124 (2018). https://doi.org/10.19304/j.cnki.issn1000-7180.2018.09.026

  11. H. Wang, Researching image denoising model based PSO-trainlm BP. Math. Pract. Theory 44(21), 137–142 (2014)

  12. M. Yuan, G. Chen, Study on improved algorithm of median filtering based on grey correlation. Geom. Spat. Inf. Technol. 43(5), 124–127, 130 (2020)

  13. W. Zhang, C. Liang, X. Gao, Design of median filtering algorithm with multistage threshold. Comput. Era 5, 9–12 (2020). https://doi.org/10.16644/j.cnki.cn33-1094/tp.2020.05.003

  14. T. Sun, S. Cui, Denoising method of super Gaussian signal based on kurtosis ICA and eigen image filtering. J. Hebei Norm. Univ. (Nat. Sci. Ed.) 44, 209–214 (2020). https://doi.org/10.13763/j.cnki.jhebnu.nse.2020.03.004

  15. R. Xu, Z. Wang, T. Zong, Edge enhancement of medical image based on improved Gaussian filter. Inf. Technol. 44(4), 75–78 (2020). https://doi.org/10.13274/j.cnki.hdzj.2020.04.016

  16. H. Xue, H. Cui, Research on image restoration algorithms based on BP neural network. J. Vis. Commun. Image Represent. 59, 204–209 (2019). https://doi.org/10.1016/j.jvcir.2019.01.014

  17. C. Zuo, Research on image nonlocal mean denoising method. Ph.D. Thesis, National University of Defense Science and technology (2016)

  18. J. Su, W. Yang, Image segmentation algorithm based on BP neural network. Ind. Control Comput. 28(12), 2932 (2015)

  19. F. Duan, X. Xiong, X. Han, A new method for image segmentation based on BP neural network and gravitational search algorithm enhanced by cat chaotic mapping. Appl. Intell. Int. J. Artif. Intell. Neural Netw. Complex Probl. Solving Technol. 43(4), 855–873 (2015). https://doi.org/10.1007/s10489-015-0679-5

  20. Y. Wu, R. Gao, J. Yang, Prediction of coal and gas outburst: a method based on the BP neural network optimized by GASA. Process Saf. Environ. Prot. 133, 64–72 (2020). https://doi.org/10.1016/j.psep.2019.10.002

  21. S. Mirjalili, A. Lewis, The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016). https://doi.org/10.1016/j.advengsoft.2016.01.008

  22. X. Lei, H. Ouyang, L. Xiao, C. Fan, Research on image segmentation based on equivalent 3-D entropy and whale optimization algorithm. Comput. Eng. 45(4), 217–222 (2019). https://doi.org/10.19678/j.issn.1000-3428.0049933

  23. A.E. Aziz, A. Mohamed, A. Ewees, A.E. Hassanien, Whale optimization algorithm and moth-flame optimization for multilevel thresholding image segmentation. Expert Syst. Appl. 83(Oct.), 242–256 (2017). https://doi.org/10.1016/j.eswa.2017.04.023

  24. Z. Yuan, X. Xie, J. Hu, D. Yao, An efficient method for traffic image denoising. Procedia Soc. Behav. Sci. 138, 439–445 (2014). https://doi.org/10.1016/j.sbspro.2014.07.222

  25. J. Shijie, L. Yanbing, A traffic image denoising algorithm based on sparse representation. J. Dalian Jiaotong Univ. 34(005), 107–111 (2013). https://doi.org/10.3969/j.issn.1673-9590.2013.05.025

  26. C. Dongxu, Y. Yan, Study on the license plate image denoising based on adaptive coupling PDE model. Comput. Meas. Control 022(008), 2592–2594 (2014)

  27. A. Jaiswal, J. Upadhyay, A. Somkuwar, Image denoising and quality measurements by using filtering and wavelet based techniques. AEU Int. J. Electron. Commun. 68(8), 699–705 (2014). https://doi.org/10.1016/j.aeue.2014.02.003

  28. F. Xiao, Y. Zhang, A comparative study on thresholding methods in wavelet-based image denoising. Procedia Eng. 15, 3998–4003 (2011). https://doi.org/10.1016/j.proeng.2011.08.749

  29. X. Zhang, S. Zhang, Diffusion scheme using mean filter and wavelet coefficient magnitude for image denoising. AEU Int. J. Electron. Commun. 70(7), 944–952 (2016). https://doi.org/10.1016/j.aeue.2016.04.012

  30. C. Tian, L. Fei, W. Zheng, Y. Xu, W. Zuo, C.-W. Lin, Deep learning on image denoising: an overview. Neural Netw. 131, 251–275 (2020). https://doi.org/10.1016/j.neunet.2020.07.025

  31. K. Wei, Y. Fu, H. Huang, 3-D quasi-recurrent neural network for hyperspectral image denoising. IEEE Trans. Neural Netw. Learn. Syst. 32(1), 363–375 (2021). https://doi.org/10.1109/TNNLS.2020.2978756

  32. Z. Zhang, Y. Liu, J. Liu, F. Wen, C. Zhu, Amp-net: denoising-based deep unfolding for compressive image sensing. IEEE Trans. Image Process. 30, 1487–1500 (2021). https://doi.org/10.1109/TIP.2020.3044472

  33. Q. Shi, X. Tang, T. Yang, R. Liu, L. Zhang, Hyperspectral image denoising using a 3-D attention denoising network. IEEE Trans. Geosci. Remote Sens. (2021). https://doi.org/10.1109/TGRS.2020.3045273

  34. L. Jing, D. Shao, Y. Xiang, L. Ma, Z. Yang, X. Zhu, Application of BP neural network in medical ultrasound image denoising. Data Commun. 5, 18–21 (2019)

  35. L. Yan, L. Wang, Copula estimation of distribution algorithm based on centroid and its application in image denoising. Comput. Eng. 42(2), 195–199, 205 (2016)

  36. H. Wang, P. Li, B. Wang, S. Zhai, N. Cai, Image deblurring restoration of BP neural network based on grey wolf algorithm. Chin. J. Liq. Cryst. Disp. 34(10), 992–999 (2019)

  37. H. Wang, Research on image restoration method based on improved gray wolf algorithm–BP neural network. Master Thesis, Ningxia University (2019). https://doi.org/10.27257/d.cnki.gnxhc.2019.000344

  38. Q. He, X. Wei, Improved whale optimization algorithm based on hybrid strategy. Appl. Res. Comput. 36, 3647–3651, 3665 (2019). https://doi.org/10.19734/j.issn.1001-3695.2018.07.0382

  39. G.K. Arora, Chaotic whale optimization algorithm. J. Comput. Des. Eng. 5, 275–284 (2018). https://doi.org/10.1016/j.jcde.2017.12.006

  40. M.M.M. Mirjalili, Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing (2017). https://doi.org/10.1016/j.neucom.2017.04.053

  41. M.A.E. Oliva, Parameter estimation of solar cells diode models by an improved opposition-based whale optimization algorithm. Energy Convers. Manag. (2018). https://doi.org/10.1016/j.enconman.2018.05.062

  42. C.F. Zhang Yong, An improved whale optimization algorithm. Comput. Eng. 44, 208–213, 219 (2018)

  43. W. Long, J. Jiao, S. Cai, Improved whale optimization algorithm for solving large-scale optimization problems. Syst. Eng. Theory Pract. 37, 2983–2994 (2017)

  44. Q. He, L. Liu, Improved whale optimization algorithm for solving function optimization problems. Appl. Res. Comput. 37, 1004–1009 (2020). https://doi.org/10.19734/j.issn.1001-3695.2018.11.0726

  45. M.Y. Wu Zequan, Improved whale optimization algorithm. Appl. Res. Comput. (2020). https://doi.org/10.19734/j.issn.1001-3695.2019.09.0536

  46. G.D. Zhang Shuiping, Dynamic search and cooperative learning for whale optimization algorithm. Appl. Res. Comput. (2019). https://doi.org/10.19734/j.issn.1001-3695.2019.05.0119

  47. Z. Chen, Prediction of soil parameters based on back propagation neural network optimized by genetic algorithm whale algorithm. J. Zhejiang Agric. Sci. 60(1), 125–128, 140 (2019). https://doi.org/10.16178/j.issn.0528-9017.20190140

  48. Q. He, K.Y. Wei, Q.S. Xu, Improved whale optimization algorithm based on hybrid strategy. Appl. Res. Comput. (2020). https://doi.org/10.19734/j.issn.1001-3695.2019.09.0528

  49. D. Li, J. Li, Y. Zhang, Z. Zeng, Gesture recognition of data glove based on PSO-improved BP neural network. Electric Mach. Control 18(8), 87–93 (2014). https://doi.org/10.15938/j.emc.2014.08.016

  50. X. Wu, N. Yao, J. Xu, Substation transformer crack image recognition based on improved neural network algorithm. Modern Electron. Tech. 40(13), 66–69 (2017). https://doi.org/10.16652/j.issn.1004-373x.2017.13.017

  51. S. Bi, Sonar image segmentation based on BP neural network optimized by genetic algorithm. Master Thesis, Inner Mongolia University (2018)

  52. K.W. Xia, C.B. Li, J.Y. Shen, An optimization algorithm on the number of hidden layer nodes in feed-forward neural network. Comput. Sci. 32(10), 143–145 (2005)

  53. Y. Zhang, H. Chen, Y. He, M. Ye, X. Cai, D. Zhang, Road segmentation for all-day outdoor robot navigation. Neurocomputing 314, 316–325 (2018)

Funding

Authors receive research support from National Natural Science Foundation of China (Grant No. 61772180), Technological innovation project of Hubei Province 2019 (2019 AAA047), Green Industry Science and Technology Leadership Program of Hubei University of Technology (No. CPYF2018005) and Hubei province graduate education innovation plan.

Author information

Contributions

CW conceived the idea of the study and participated in the design and coordination and helped to revised the manuscript. RW and HY contributed to refining the ideas. ML and SW performed the research, collected the data, analyzed the results and drafted the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Shuping Wang.

Ethics declarations

Competing interests

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Wang, C., Li, M., Wang, R. et al. An image denoising method based on BP neural network optimized by improved whale optimization algorithm. J Wireless Com Network 2021, 141 (2021). https://doi.org/10.1186/s13638-021-02013-2
