A new method based on stacked auto-encoders to identify abnormal weather radar echo images

Abstract

Real-time monitoring of radar products is undeniably an important part of operational meteorology. However, weather radars often produce abnormal radar echoes due to various factors, such as climate and hardware failure, so automatic identification of abnormal radar products is of great practical significance and research value. The traditional algorithms for identifying anomalies in weather radar echo images are neither highly accurate nor efficient. To improve the efficiency of anomaly identification, a novel method combining classical image processing theory and deep learning is proposed. The method consists of three main parts: coordinate transformation, integral projection, and classification using deep learning. Extensive experiments were carried out to validate the performance of the new algorithm. The results show that the recognition rate of the proposed method exceeds 95%, successfully achieving the goal of screening abnormal radar echo images, while its computation speed is also fairly satisfactory.

Introduction

Doppler weather radar is a monitoring tool for small- and medium-scale catastrophic weather, so its measurement accuracy is very important for weather forecasting. However, because of external electromagnetic interference and failures of the transmitter-receiver system, weather radars often output erroneous data and abnormal echo images. These anomalies are not easy for operational staff to identify and control, so a solution for data quality control is needed. At present, 143 operational new-generation weather radars are running in China, and recognizing anomalies manually in such a huge volume of radar data is a heavy workload. For these reasons, automatic identification of abnormal echo images in radar data is of great significance.

Many scholars have worked on detecting anomalies in weather radar echoes. Some achieve identification with classical image processing methods. For example, Chen et al. [1] proposed a method that handles abnormal radar echoes by extracting texture features. Van de Weijer and Schmid [2] extended image feature descriptions with color information to better accomplish feature extraction. The main drawback of these methods is that they are complicated and inefficient. Other scholars prefer artificial intelligence (AI) [3] algorithms for identifying anomalies. For instance, Nan and Chong [4] accomplished automatic recognition of radar echoes by means of traditional machine learning, but its recognition efficiency is not very high.

In 2006, Geoffrey Hinton first proposed the concept of deep learning [5] and pointed out that it is a set of machine learning algorithms based on learning representations of data. Deep models include the convolutional neural network (CNN), stacked auto-encoder (SAE), deep belief network (DBN), etc. In recent years, deep learning has been widely used in image processing because it requires less human intervention.

In this paper, we propose a new method combining classical image processing theory and deep learning to realize automatic identification of radar anomaly products, and the method is suitable for all weather radars. We utilize a deep learning framework, the SAE, because of its superiority in feature representation. In addition, integration projection [6, 7] is applied to extract features, which improves both computing speed and recognition rate.

Data

The radar echo images used in this article were acquired from different weather stations across China.

The abnormal radar echo images fall into three types: super refraction, arc shape, and radial shape. The training set contains 800 images in total: 226 normal radar echo images and 574 abnormal ones, of which 175 show super refraction, 173 are arc-shaped, and 226 are radial-shaped. Each image is 460 × 460 pixels. It is worth noting that the ratio of the four kinds of radar echo images is about 1.3:1:1:1.3; since the classes are roughly balanced, this will not lead to over-fitting.

In Fig. 1, a is an example of super refraction, b an example of arc shape, and c an example of radial shape. For clearer illustration, the samples shown are easy to distinguish, but in fact the majority of abnormal echo images are hard to tell apart, especially those with super refraction.

Fig. 1

Examples of abnormal radar echo images

Methods

The general framework of algorithm

The goal of this paper is to detect and classify abnormal radar echo images automatically by combining traditional image processing and deep learning. The flow chart of the new algorithm is shown in Fig. 2.

Fig. 2

Flow chart of proposed method

As shown in Fig. 2, the original abnormal radar echo images first pass through a median filter and are then converted from Cartesian coordinates into log-polar coordinates [8]. Integration projection is then performed on the images in log-polar coordinates, and its results serve as the inputs of the SAE, a deep learning model. After training the SAE, we obtain the classification results.

Coordinate transformation

As mentioned above, the first step of the whole algorithm is coordinate transformation, which converts the image into log-polar coordinates.

Representing an image in log-polar coordinates is inspired by the biological vision system. The log-polar coordinate system is a two-dimensional system that extends the polar coordinate system by applying a logarithm to the radial coordinate. In this system, the position of a point is given by a real pair (ρ, θ):

$$ \left\{\begin{array}{l}\rho =\log \sqrt{x^2+y^2}\\ \theta ={\tan}^{-1}\left(y/x\right)\end{array}\right. $$
(1)

where ρ is the logarithm of the distance between the point and a particular point (the origin), and θ is the angle between a reference line (such as the x axis) and the straight line through the point and the origin.
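
A small Python sketch of this resampling (the paper's code was MATLAB; the grid sizes and nearest-neighbour interpolation here are illustrative choices, not taken from the paper):

```python
import numpy as np

def to_log_polar(img, n_rho=64, n_theta=64):
    """Resample a grayscale image onto a log-polar grid.

    rho = log(sqrt(x^2 + y^2)), theta = atan2(y, x), with the origin
    at the image centre (Eq. 1); nearest-neighbour sampling.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(cx, cy)
    # Log-spaced radii from 1 pixel out to the corner distance.
    rhos = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rhos, thetas, indexing="ij")
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    return img[ys, xs]
```

With this mapping, rotating the input image about its centre corresponds (approximately) to a circular shift along the theta axis of the output, which is why rotations disturb log-polar features less.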

The application of log-polar coordinates in image processing is becoming more and more widespread. Logarithmic polar coordinates offer more convenience than the Cartesian rectangular coordinate system for image feature extraction. To illustrate this, we took an abnormal echo image as the experimental sample and examined it from the perspective of image rotation [9].

Figure 3 shows the images before rotation: a in the Cartesian coordinate system and b in the log-polar coordinate system. Figure 4 shows the corresponding results after rotating Fig. 3 by 90° clockwise. The image in the Cartesian coordinate system changes much more than the one in the log-polar coordinate system.

Fig. 3

The pictures before rotation

Fig. 4

The pictures after rotation

To further demonstrate the rotation resistance of image features under the log-polar coordinate system, we quantified this characteristic via the Zernike moment [10], which is commonly used as a descriptor of object shape. To define the Zernike moment, we first introduce the Zernike functions. The (p, q)-order Zernike function [11] is defined as

$$ {V}_{pq}\left(x,y\right)={R}_{pq}\left(\rho \right)\exp \left( jq\theta \right),\kern0.5em {x}^2+{y}^2\le 1 $$
(2)

where \( \rho =\sqrt{x^2+{y}^2} \) is the distance between the origin and the pixel (x, y), and θ = arctan(y/x) is the angle between the vector and the x axis. In (2), Rpq(ρ) is a polynomial in ρ of degree p ≥ 0 containing no power of ρ lower than |q|. The integer q may be positive, negative, or zero, and must satisfy

$$ \left|q\right|\le p $$
(3)

where p − |q| is an even number.

The orthogonality relation for {Vpq(x, y)} is

$$ \int \underset{D}{\int }{V}_{pq}^{\ast}\left(x,y\right){V}_{p'q'}\left(x,y\right)\, dxdy=\frac{\pi }{p+1}{\delta}_{pp'}{\delta}_{qq'} $$
(4)

where δpp' = 1 if p = p' and 0 otherwise.

Due to the orthogonality and completeness of {Vpq(x, y)}, any square-integrable image function f(x, y) can be expanded as follows:

$$ f\left(x,y\right)=\sum \limits_{p=0}^{\infty}\sum \limits_{q=-p}^p{\tau}_p{A}_{pq}{V}_{pq}\left(x,y\right),\kern0.5em p-\left|q\right|=\mathrm{even} $$
(5)

where τp is a constant, and

$$ {\tau}_p=\frac{p+1}{\pi } $$
(6)

Thus, the Zernike moment Apq is as follows:

$$ {A}_{pq}=\int \underset{D}{\int }f\left(x,y\right){V}_{pq}^{\ast}\left(x,y\right) dxdy $$
(7)
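
For a digital image, the double integral in (7) is approximated by a sum over the pixels that fall inside the unit disc. A hedged Python sketch (the paper's implementation was MATLAB), using the standard explicit formula for the radial polynomial Rpq, which the paper does not spell out:

```python
import numpy as np
from math import factorial

def radial_poly(p, q, rho):
    """Standard Zernike radial polynomial R_pq(rho), with p - |q| even."""
    q = abs(q)
    out = np.zeros_like(rho)
    for k in range((p - q) // 2 + 1):
        c = ((-1) ** k * factorial(p - k)
             / (factorial(k)
                * factorial((p + q) // 2 - k)
                * factorial((p - q) // 2 - k)))
        out += c * rho ** (p - 2 * k)
    return out

def zernike_moment(img, p, q):
    """Discrete approximation of A_pq in Eq. (7) over the unit disc."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    # Map pixel coordinates into [-1, 1] x [-1, 1], centred on the image.
    x = (2 * x - (w - 1)) / (w - 1)
    y = (2 * y - (h - 1)) / (h - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0
    V = radial_poly(p, q, rho) * np.exp(1j * q * theta)
    dA = (2.0 / (w - 1)) * (2.0 / (h - 1))  # pixel area in unit-disc units
    return np.sum(img[mask] * np.conj(V[mask])) * dA
```

As a sanity check, for a constant image f = 1 the moment A00 approximates the area of the unit disc, π, while A11 vanishes by symmetry.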

Table 1 shows the Zernike moment values of the original and rotated images in two coordinates system.

Table 1 Comparison of Zernike moment between the original and rotated images in two coordinates system

The normalized value in Table 1 is the Zernike moment normalized by the maximum, and the difference value is the difference between normalized values at adjacent rotation angles.

As shown in Table 1, under log-polar coordinates the difference values are 13%, 1%, 1%, and 14% for rotation angles of 30°, 45°, 60°, and 90°, respectively, while under the Cartesian coordinate system they are 35%, 4%, 11%, and 9%. The changes of the Zernike moment are thus smaller in the log-polar coordinate system than in the Cartesian coordinate system. More samples have been tested for validation. Hence, the coordinate transformation is a necessary step before detecting and classifying the abnormal radar echo images.

Integration projection

After the coordinate transformation, the second step is to extract image features by integration projection for SAE model training. The theory is as follows: let I(x, y) be the gray value at a point (x, y); the vertical and horizontal integration projections are then

$$ {\displaystyle \begin{array}{l}{S}_v(x)={\int}_{y_1}^{y_2}I\left(x,y\right) dy\\ {}{S}_h(y)={\int}_{x_1}^{x_2}I\left(x,y\right) dx\end{array}} $$
(8)

We employ integration projection in both directions to better express the features of the images after coordinate transformation. This gives two feature vectors per image for SAE training.
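
In discrete form the integrals of Eq. (8) reduce to column and row sums. A minimal Python sketch (the original implementation was MATLAB):

```python
import numpy as np

def integration_projection(img):
    """Vertical and horizontal integral projections of Eq. (8).

    For a discrete image the integrals become sums: S_v(x) sums the
    column at x over all y, and S_h(y) sums the row at y over all x.
    """
    img = np.asarray(img, dtype=float)
    s_v = img.sum(axis=0)  # one value per column (vertical projection)
    s_h = img.sum(axis=1)  # one value per row (horizontal projection)
    return s_v, s_h
```

How the two vectors are combined into the SAE input (e.g., concatenation or normalization) is not detailed in the paper; the function above only produces the raw projections.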

Figures 5, 6, 7, and 8 show examples of integration projection. The left panel is the projection in the horizontal direction, and the right panel is the projection in the vertical direction. A large number of experimental results demonstrate that different types of abnormal radar echo images have different waveform characteristics. For the horizontal projections, the morphological characteristics of the four types follow different functional patterns. For the vertical projections, however, the difference between arc-shaped and radial-shaped images is quite small, which calls for an automatic learning algorithm to extract the representation.

Fig. 5

Example of integration projection based on normal echo images

Fig. 6

Example of integration projection based on images with super refraction

Fig. 7

Example of integration projection based on images in arc shape

Fig. 8

Example of integration projection based on images in radial shape

Stacked auto-encoder (SAE)

Bengio [3] has shown that a deep or hierarchical architecture is useful for finding highly non-linear and complex patterns in data. Motivated by that study, we consider an SAE, in which an auto-encoder (AE) is the building block, for latent feature representation [12] to recognize anomalies in radar echo images. One of the most important strengths of the SAE is its ability to find highly non-linear and complicated relations among input features.

An auto-encoder is a neural network that reproduces its input signals as faithfully as possible. It consists of three layers: an input layer, a hidden layer, and an output layer. A stack of multiple AEs is called a stacked auto-encoder (SAE), the type of deep learning model used here. We construct the model by cascading auto-encoders, feeding the hidden-unit outputs of the lower layer into the input units of the upper layer.

The SAE model consists of two parts: encoders and decoders. The encoding part of the SAE maps the original features through a hierarchical representation to a low-dimensional compressed representation [13, 14]. Let DH and DI denote the number of hidden and input units in a neural network, respectively. Given a set of training samples \( X={\left\{{x}_i\in {R}^{D_I}\right\}}_{i=1}^N \), let ϕ(x) be the non-linear activation function, in this case

$$ \phi (x)=\frac{2}{1+{e}^{-2x}}-1 $$
(9)

The latent representation yi through ϕ(x) is then

$$ {y}_i(x)=\phi \left({w}_0+\sum \limits_{i=1}^N{w}_i{x}_i\right) $$
(10)

which can also be written

$$ {y}_i(X)=\phi \left(\mathrm{w}{X}^T\right) $$
(11)

A layer in the network consists of N nodes

$$ y(X)=\phi \left({WX}^T\right) $$
(12)

where W is an encoding weight matrix.

The decoding part is composed of two layers: the hidden layer and the output layer. The output layer has a linear activation function, thus

$$ y(X)={W}_d\phi \left({W}_eX\right) $$
(13)

where We holds the parameters of the encoding layer and Wd those of the decoding layer.

Let ESAE and DSAE be the encoder and decoder parts of the SAE model, respectively; the reconstruction of a sample xn is then defined as

$$ {\hat{x}}_n={D}_{SAE}\circ {E}_{SAE}\left({x}_n\right) $$
(14)

where ∘ is the function composition operator. Let en be the error of a sample xn; thus

$$ {e}_n={x}_n-{\hat{x}}_n $$
(15)

The mean-square error is defined as

$$ \varepsilon =\frac{1}{N}\sum \limits_{n=1}^N{\left\Vert {e}_n\right\Vert}_2^2 $$
(16)

The mean-square error ε can indicate the performance of the SAE model.
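
The encode-decode round trip of Eqs. (9), (13), and (16) can be sketched in a few lines of Python (the paper's implementation was MATLAB; the weights below are untrained placeholders, and biases are folded into the weight matrices as in Eq. (10)):

```python
import numpy as np

def phi(x):
    """Activation of Eq. (9); algebraically identical to tanh(x)."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def reconstruct(X, W_e, W_d):
    """x_hat = D_SAE(E_SAE(x)) for one AE layer, Eqs. (13)-(14):
    non-linear encoder followed by a linear decoder."""
    return phi(X @ W_e.T) @ W_d.T

def mse(X, X_hat):
    """Mean squared reconstruction error over the samples, Eq. (16)."""
    e = X - X_hat
    return np.mean(np.sum(e ** 2, axis=1))
```

Note that Eq. (9) is simply the hyperbolic tangent written in exponential form, so the hidden activations lie in (-1, 1).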

Figure 9 shows the SAE model used in this study, with three auto-encoders stacked hierarchically. Note that the number of units in the input layer equals the dimension of the input feature vector. The number of hidden units can be chosen according to the input, but it is preferably larger than the input dimension. Here, we set up three hidden layers. The SAE model includes two parts: the stacked auto-encoder and a softmax classifier [15]. The stacked auto-encoder performs encoding and decoding automatically, and the softmax classifier is equivalent to a neural network layer.

Fig. 9

SAE model

A traditional neural network trains its parameters (the weight matrices and biases) by back-propagation, which fails for deep networks because it easily falls into a poor local optimum. The SAE model instead uses greedy layer-wise learning to train its parameters. The key idea of this algorithm is to train one layer at a time by maximizing the variational lower bound [5]; that is, the output of the lth hidden layer is treated as the input to the (l + 1)-th hidden layer.

Focusing on the ultimate goal of classifying abnormal radar echo images, we optimize the deep network in a supervised manner. To this end, we stack an output layer on top of the SAE model, as shown in Fig. 9. This top layer represents the class label of the input; it is the so-called softmax classifier, which trains the network by back-propagation with gradient descent. This supervised optimization is called "fine-tuning", and it reduces the risk of falling into a poor local optimum. Table 2 summarizes the algorithm of a stacked auto-encoder.
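
The greedy layer-wise pretraining described above can be outlined as follows. This is a simplified Python sketch under stated assumptions: plain gradient descent on the reconstruction error, biases omitted, and hyperparameters (learning rate, epochs, layer sizes) chosen purely for illustration, not taken from the paper:

```python
import numpy as np

def train_ae_layer(X, n_hidden, lr=0.01, epochs=200, seed=0):
    """Train one tanh auto-encoder (linear decoder) by gradient descent."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W_e = rng.normal(0, 0.1, (n_hidden, n_in))
    W_d = rng.normal(0, 0.1, (n_in, n_hidden))
    for _ in range(epochs):
        H = np.tanh(X @ W_e.T)        # hidden activations
        X_hat = H @ W_d.T             # linear reconstruction
        E = X_hat - X                 # reconstruction error
        # Back-propagate the MSE gradient through both layers.
        dW_d = E.T @ H / len(X)
        dH = (E @ W_d) * (1 - H ** 2)  # tanh derivative
        dW_e = dH.T @ X / len(X)
        W_d -= lr * dW_d
        W_e -= lr * dW_e
    return W_e, np.tanh(X @ W_e.T)    # encoder weights and layer output

def pretrain_sae(X, hidden_sizes):
    """Greedy layer-wise pretraining: layer l's output feeds layer l+1."""
    weights, H = [], X
    for n_hidden in hidden_sizes:
        W_e, H = train_ae_layer(H, n_hidden)
        weights.append(W_e)
    return weights, H
```

After pretraining, a softmax output layer would be stacked on top and the whole network fine-tuned with back-propagation, as described in the text.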

Table 2 The algorithm of a stacked auto-encoder

Experiments and results

The coordinate transformation and integration projection algorithms in this paper were implemented with in-house MATLAB 2014a code, and the SAE model was implemented based on the toolbox released by Palm in 2012 [16].

A critical problem in classifier design is feature extraction and selection. Saberian and Vasconcelos proposed SOP-Boost [17], an algorithm based on boosting over a pool of simple features, and showed its superior performance over previous boosting methods. It was therefore necessary to compare the two classifiers, SAE and SOP-Boost. As is well known, two criteria judge whether an algorithm is good: recognition accuracy and computation speed. We therefore compare the classifiers on both.

The recognition accuracy

There are many feature extraction methods in image processing, such as those based on pixel color, texture, shape, and so on. To highlight the superiority of integration projection, the feature extraction method used in this paper, we also made a detailed comparison of all these methods. Tables 3 and 4 show the classification results of five feature extraction methods with the SOP-Boost and SAE classifiers, respectively.

Table 3 The recognition accuracy using different methods of feature extraction in SOP-Boost
Table 4 The recognition accuracy using different methods of feature extraction in SAE

The color method is based on the CBIR_colorhist color histogram, with 256 bins in total. The texture method extracts features of the radar echoes by texture, with dimension 1 × 256. The color + texture method combines CBIR_colorhist and texture features. The color + texture + shape method uses all three characteristics of color, texture, and shape, with dimension 1 × 576.

The results of Tables 3 and 4 both indicate that the recognition rate using integral projection is superior to that of the other methods, with either SOP-Boost or SAE. For the original picture, without any prior feature extraction, the rates are 55% and 77.59%, respectively. Therefore, for radar echo pictures, integral projection extracts features more efficiently and yields a higher recognition rate than the other methods. Regarding the choice of classifier, the SAE model outperforms SOP-Boost, with an identification rate about 3% higher.

In addition, we selected 550 further radar echo pictures (150 normal images, 120 with super refraction, 160 arc-shaped, and 120 radial-shaped) to test the recognition accuracy of the algorithm combining integral projection and SAE. Table 5 shows the results.

Table 5 The recognition accuracy of testing samples

The testing samples do not overlap with the training samples. The method proposed in this paper achieves a high recognition rate for all types of radar echo pictures: 98.33% for super refraction, 96.25% for arc-shaped pictures, and 91.67% for radial-shaped pictures, for an average of 95.41%.

In conclusion, the algorithm proposed in this paper for detecting and classifying radar echo pictures performs very well in recognition accuracy.

The comparison of computation speed

We also ran several experiments on the training samples to measure the computation speed of all the methods, for both the SAE model and SOP-Boost. Table 6 displays the comparison of computation times; each time is the average of multiple runs.

Table 6 The comparison of computation time

As Table 6 shows, for the SAE model the computation time without feature extraction is about 4 h, while with integration projection it is just 2 min; the latter is therefore about 120 times faster. For SOP-Boost, the computation times are 3 min without feature extraction and 1 min with integration projection. The computation speed of the SAE model with integration projection is thus almost as good as that of SOP-Boost. Taking recognition rate and computation speed into account together, the SAE model is fairly satisfactory.

Summary and conclusions

In this work, we propose an abnormal radar echo recognition method combining image processing theory and deep learning. The experimental results show that the proposed method is effective in recognizing anomalies in radar echo images. Furthermore, the method overcomes the shortcoming that traditional feature extraction methods cannot describe radar echo pictures in sufficient detail, and it significantly improves the recognition rate and computation speed. We also compare it with SOP-Boost: the proposed method performs better in recognition accuracy, and its computation speed is satisfactory as well.

However, several things remain to be improved. First, to further improve recognition performance, the integral-projection features could be refined, for example by using the size or number of wave peaks and troughs, or the slope of the waveform. Second, we only analyze three kinds of abnormal radar echoes, which cannot meet the demand of locating the abnormal part of the radar; more types of radar echoes will be studied in future research. Finally, with the development of deep learning, more efficient models may be applied to recognition and classification in the future, which may achieve better results.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

References

  1. X.W. Chen, B. Zhou, Y. Guo, F. Xu, Q. Zhao, Structure guided texture inpainting through multi-scale patches and global optimization for image completion. Sci. China Inf. Sci. 57(1), 1–16 (2014)

  2. J. van de Weijer, C. Schmid, Coloring local feature extraction, in Proc. ECCV (2006), pp. 334–348

  3. Y. Bengio, Learning deep architectures for AI. Found. Trends Mach. Learn. 2(1), 1–55 (2009)

  4. H. Nan, P. Chong, Automatic identification system of abnormal radar echoes based on image processing technology. Meteorol. Sci. Technol. 41(6), 993–997 (2013)

  5. G.E. Hinton, R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)

  6. L.H. Ren, K. Liu, H.Y. Zhang, et al., Rectangle detection of gray projection integral extreme value method. Comput. Eng. 38(8), 159–160 (2012)

  7. M. Rees, S.P. Ellner, Integral projection models for populations in temporally varying environments. Ecol. Monogr. 79(4), 575–594 (2009)

  8. S. Fabio, C. Manuela, P.S. Silvio, Design strategies for direct multi-scale and multi-orientation feature extraction in the log-polar domain. Sci. Dir. 33(1), 41–51 (2012)

  9. S. Arivazhagan, K. Gowri, L. Ganesan, Rotation and scale-invariant texture classification using log-polar and ridgelet transform. J. Pattern Recognit. Res. 1, 131–139 (2010)

  10. Z.W. Yang, T. Fang, Research on image normalization based on Zernike moment. Comput. Eng. 30(12), 34–36 (2004)

  11. S.X. Liao, M. Pawlak, On the accuracy of Zernike moments for image analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(12), 1358–1364 (1998)

  12. H.I. Suk, S.W. Lee, D. Shen, Latent feature representation with stacked auto-encoder for AD/MCI diagnosis. Brain Struct. Funct. 220(2), 841–859 (2015)

  13. D. Gustavsson, N. Wadströmer, Non-linear hyperspectral subspace mapping using stacked auto-encoder, in The Swedish Artificial Intelligence Society (2016)

  14. Y.Y. Li, L.H. Zhou, et al., Change detection in synthetic aperture radar images based on log-mean operator and stacked auto-encoder, in Proc. IEEE IGARSS (2017), pp. 3090–3096

  15. R. Salakhutdinov, G. Hinton, Replicated softmax: an undirected topic model. Adv. Neural Inf. Process. Syst., 1607–1614 (2009)

  16. R.B. Palm, Prediction as a candidate for learning deep hierarchical models of data. Technical University of Denmark (2012)

  17. M.J. Saberian, N. Vasconcelos, Boosting algorithms for simultaneous feature extraction and selection, in Proc. IEEE CVPR (2012), pp. 2448–2455


Funding

This work is supported by The National Key Research and Development Program of China research fund (No. 2017YFC1501701).

Author information


Contributions

Ling Yang participated in the design of the study and performed the statistical analysis. Yun Wang participated in drafting the manuscript, conceived of the deep learning algorithm, and participated in its application in radar echo images. Zhongke Wang participated in algorithm realization of coordinate transformation and image collection. Yang Qi participated in the design and integration projection. Yong Li was responsible for the article program debugging. Zhipeng Yang participated in the English correction and integration of the paper. Wenle Chen made key amendments to important thesis content. All authors read and approved the final manuscript.

Authors’ information

Ling Yang was born in 1974. She received a Dr. Eng. in the School of Life Science, University of Electronic Science and Technology of China in 2012.

Since 2014, she has been a professor at the College of Electronic Engineering, Chengdu University of Information Technology. She is the author of more than 50 articles. Her research interests include signal and image processing, multi-source data fusion technology, and meteorological observation technology.

Yun Wang was born in Linfen, Shanxi Province, China, in 1992. She received a M.S. degree in signal and information processing from the Chengdu University of Information Technology in 2018.

Zhongke Wang received a B.S. degree in the Chengdu University of Meteorology in 1994. Now, he is an associate professor at the College of Information Security Engineering, Chengdu University of Information Technology. His interests are image processing and new weather radar algorithms.

Qi Yang was born in Yilong County, Sichuan Province, China, in 1991. He received a M.S. degree in meteorological detection technology from the Chengdu University of Information Technology in 2018. He is currently pursuing a Ph.D. degree in earth detection and information technology at the Chengdu University of Technology.

Yong Li received a M.S. degree in signal and information processing from the Chengdu University of Information Technology in 2016. His interests are image processing and machine learning algorithms.

Zhipeng Yang holds a postgraduate degree and a doctorate degree. He is currently an associate professor at the College of Electronic Engineering, Chengdu University of Information Technology. He mainly studies image processing.

Wenle Chen received a M.S. degree in signal and information processing from the Chengdu University of Information Technology in 2017.

Corresponding author

Correspondence to Zhongke Wang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Yang, L., Wang, Y., Wang, Z. et al. A new method based on stacked auto-encoders to identify abnormal weather radar echo images. J Wireless Com Network 2020, 177 (2020). https://doi.org/10.1186/s13638-020-01769-3


Keywords

  • Radar echo image
  • Coordinate transformation
  • Integration projection
  • Deep learning
  • Recognition