
Precipitation cloud identification based on faster-RCNN for Doppler weather radar

Abstract

Precipitation clouds are visible aggregates of hydrometeors floating in the atmosphere after condensation, and they can be divided into stratiform clouds and convective clouds. Different precipitation clouds often accompany different precipitation processes, so accurate identification of precipitation clouds is important for the prediction of severe precipitation. Traditional identification methods mostly depend on the differences in radar reflectivity distribution morphology between stratiform and convective precipitation clouds in three-dimensional space. However, they share a common shortcoming: the radial velocity data detected by Doppler weather radar have not been applied to the identification of precipitation clouds, because radial velocity is insensitive to convective movement in the vertical direction. This paper proposes a new method for precipitation cloud identification based on a deep learning algorithm, which exploits the distribution morphology of multiple radar data products. It consists of three parts: Constant Altitude Plan Position Indicator (CAPPI) interpolation of radar reflectivity, radial projection of the ground horizontal wind field using radial velocity data, and precipitation cloud identification based on Faster-RCNN. The test results show that the proposed method outperforms traditional methods in terms of precision, and it also offers clear advantages in running time and adaptability.

1 Introduction

In recent decades, precipitation cloud identification based on ground-based weather radar data has been widely used in radar quantitative precipitation estimation, weather modification, and aviation meteorology. The capability of microwaves to penetrate cloud and rain has placed the weather radar in an unchallenged position for remotely surveying the atmosphere [1,2,3]. Doppler weather radar, regarded as one of the most powerful tools for detecting weather processes, can detect not only the distribution of precipitation clouds in the atmosphere but also their movement trends [4,5,6].

In earlier research, the zero-layer bright band was mostly used as the basis for identifying precipitation cloud types: in regions exhibiting the zero-layer bright band, the precipitation cloud type defaults to stratiform cloud; otherwise, it is convective cloud [7]. The limitation of these methods is that the zero-layer bright band is visible only when stratiform cloud precipitation has developed to maturity. Houze et al. [8] proposed a method to distinguish precipitation cloud types by using rain gauge measurements: when the precipitation intensity exceeds a fixed threshold, the precipitation cloud is considered convective; otherwise, it is stratiform. The limitation of this method is that it can only determine the center of a convective cloud, and it easily misjudges nearby convective precipitation areas with weak precipitation intensity. Churchill et al. [9] determine the convective cloud center through a radar reflectivity threshold; precipitation clouds within a fixed radius of the convective cloud center are treated as convective clouds by default, and the rest as stratiform clouds. Steiner et al. [10] pointed out the irrationality of using a fixed reflectivity threshold and influence radius to determine the convective cloud region: the reflectivity threshold of the convective cloud center should be determined from the reflectivity distribution of the region near the center, and the influence radius should be calculated from the reflectivity of the center. The limitation of these methods is that they do not take into account the influence of the zero-layer bright band, which leads to misclassification of stratiform clouds as convective clouds. Biggerstaff et al. [11] analyzed the distribution of radar reflectivity corresponding to stratiform and convective clouds in three-dimensional space and applied it to the identification of precipitation clouds. This method takes into account the influence of the zero-layer bright band and achieves good recognition results; it has therefore been widely used in the United States. In Reference [12], the authors proposed a method to distinguish precipitation cloud types based on the difference in radar reflectivity distribution morphology between convective and stratiform precipitation clouds. In that work, six reflectivity-morphology parameters are presented: composite reflectivity and its horizontal gradient, echo top height associated with 35 dBZ reflectivity and its horizontal gradient, and vertically integrated liquid water content and its density. The precipitation clouds are then identified by a fuzzy logic algorithm based on the distribution of each reflectivity-morphology parameter. The accuracy of these methods depends on the reliability and number of the selected parameters. However, increasing the number of identification parameters multiplies the computational cost, which greatly reduces the efficiency of the algorithm.

All of the above methods are based on the reflectivity data detected by Doppler weather radars. However, the radial velocity data detected by Doppler weather radar have not been applied to the identification of precipitation clouds, because the radial velocity is the velocity of precipitation particles relative to the radar, which is not sensitive to convective motion in the vertical direction [13, 14]. In the present study, we propose a new method for precipitation cloud identification based on a deep learning algorithm. Firstly, the adaptive Barnes interpolation method is used to interpolate the radar volume scan data to obtain reflectivity CAPPI data at multiple altitudes. Secondly, the radial velocity data of the ground horizontal wind field are retrieved from the radial velocity data detected by the Doppler weather radar at the minimum elevation. Finally, the Faster-RCNN model is applied to identify the precipitation clouds based on the results above.

This paper is organized as follows: Sect. 2 describes the datasets for deep learning training and testing. Section 3 presents the new method for precipitation cloud identification based on the Faster-RCNN algorithm. Two Faster-RCNN models were constructed here, one taking only the reflectivity data as input and the other taking the reflectivity data and the Doppler velocity data as input. Moreover, the traditional precipitation cloud identification method based on fuzzy logic is briefly introduced in Sect. 3.4. Section 4 evaluates the proposed method by comparing its identification results with those of the traditional method. Section 5 concludes this study.

2 Data

In this research, we collected a large amount of precipitation data from the CINRAD-SA dual-polarization Doppler weather radar, which is located in Guangzhou, Guangdong, China (23° 01′ N, 113° 35′ E). The radar characteristics are shown in Table 1.

Table 1 The characteristics of CINRAD-SA radar

In this paper, the Faster-RCNN algorithm, a target detection algorithm based on convolutional neural networks (CNN), is adopted to identify the precipitation clouds. The accuracy of the final identification result depends on the number of samples used in the training process. In order to obtain high identification accuracy, this paper constructs a rich sample set, which includes 6400 training samples and 800 test samples. The data set is selected from the precipitation data detected by the above radar during 2016–2018. Table 2 summarizes the datasets used in this paper. As shown in Table 2, each sample is a matrix with a size of \({400} \times {400} \times {7}\) cells, and it can be divided into three parts: the reflectivity data (\(Z\)) after CAPPI interpolation, and the radial velocity data (\(V\)) and spectral width data (\(W\)) of the ground horizontal wind field, which are retrieved from the radar detection data at the minimum elevation.

Table 2 The datasets used in the experiments

3 Methods

3.1 The CAPPI interpolation of reflectivity

Stratiform cloud is caused by the vertical rising motion of air over a wide area. The rising velocity is uniform and often less than the falling velocity of raindrops. In general, stratiform clouds are large in horizontal scale, thin in vertical thickness, and have flat tops [15, 16]. During radar detection, the characteristics of stratiform cloud in the reflectivity data can be summarized as follows. (a) The reflectivity data are uniformly distributed and have a diffuse shape. In a large-scale stratiform cloud precipitation process, there may be multiple reflectivity centers, but the value of a reflectivity center generally does not exceed 30 dBZ. (b) The horizontal and vertical gradients of the reflectivity data are small. (c) It is often accompanied by the zero-layer bright band when surface precipitation particles are liquid.

Convective cloud is caused by vertical air motion due to the instability of the atmosphere. Its horizontal scale is small and its vertical convection is strong, often accompanied by thunderstorms, rainstorms, hail, and other disastrous weather [17]. During radar detection, the characteristics of convective cloud in the reflectivity data can be summarized as follows. (a) The reflectivity value of convective cloud is larger than that of stratiform cloud, and the central reflectivity can exceed 50 dBZ. (b) The horizontal and vertical gradients of the reflectivity data are large.

In meteorological services, the CINRAD-SA radars mostly adopt the volume scan mode to obtain the three-dimensional distribution of precipitation clouds in the atmosphere. As shown in Fig. 1, the radar completes a 360-degree scan at a fixed elevation and then changes the elevation to scan the next layer; this process is repeated until all elevation layers are scanned, and the whole procedure is called a volume coverage pattern (VCP). VCP21 is a scanning strategy often used in the monitoring of precipitation weather. It scans 9 elevation layers in 6 min, providing a guarantee for real-time monitoring of precipitation weather processes. However, the volume scan data are distributed on conical surfaces and are sparse in the vertical direction. In order to obtain the distribution characteristics of precipitation clouds in both the vertical and horizontal directions, it is necessary to interpolate the volume scan data in the vertical direction so as to obtain the horizontal distribution of the radar data at multiple altitudes. The interpolated radar data are referred to as CAPPI data.

Fig. 1

The schematic diagram of PPI scan mode

Adaptive Barnes interpolation is a commonly used interpolation method, mainly applied to sparsely distributed discrete points [18, 19]. As shown in Fig. 2, \(f_{1}\)–\(f_{8}\) are values detected by the radar in 8 range bins, and \(f_{0}\) is the point to be interpolated. The interpolation can be expressed as

$$f_{0} = \frac{{\sum\nolimits_{k = 1}^{8} {\omega_{k} \times f_{k} } }}{{\sum\nolimits_{k = 1}^{8} {\omega_{k} } }}$$
(1)

where \(\omega_{k}\) is the weight of \(f_{k}\), which can be defined as

$$\omega_{k} = \begin{cases} \exp \left[ { - \dfrac{{(r_{k} - r_{0} )^{2} }}{{K_{r} }} - \dfrac{{(\theta_{k} - \theta_{0} )^{2} }}{{K_{\theta } }} - \dfrac{{(\phi_{k} - \phi_{0} )^{2} }}{{K_{\phi } }}} \right] & (f_{k}\ \text{is valid}) \\ 0 & (f_{k}\ \text{is null}) \end{cases}$$
(2)

where \(r_{k}\), \(\theta_{k}\) and \(\phi_{k}\) are the distance, azimuth and elevation in polar coordinates corresponding to \(f_{k}\), respectively. Assuming that the coordinate of \(f_{0}\) in the Cartesian coordinate system is \(\left( {X_{0} ,Y_{0} ,Z_{0} } \right)\), with the radar station as the origin of the coordinates, the polar coordinate \(\left( {R_{0} ,\theta_{0} ,\phi_{0} } \right)\) corresponding to \(f_{0}\) can be expressed as

$$\begin{cases} R_{0} = \sqrt{X_{0}^{2} + Y_{0}^{2} + Z_{0}^{2}} \\ \theta_{0} = \arctan \left(Y_{0} / X_{0}\right) \\ \phi_{0} = \arctan \left(Z_{0} / \sqrt{X_{0}^{2} + Y_{0}^{2}}\right) \end{cases}$$
(3)
Fig. 2

An illustration of adaptive Barnes interpolation

In Eq. 2, \(K_{r}\), \(K_{\theta }\) and \(K_{\phi }\) are the smoothing parameters for radial distance, azimuth and elevation, respectively, which can be adjusted to achieve different smoothing effects. In this paper, \(K_{\theta } = 0.76\), while \(K_{r}\) and \(K_{\phi }\) are defined as

$$K_{r} = R^{2} \times K_{\theta }$$
(4)
$$K_{\phi } = \cos^{2} \phi \times K_{\theta }$$
(5)

The resolution of the radar data differs in radial distance, azimuth and elevation during interpolation. In the radial direction, the spacing between adjacent range bins is fixed, whereas in the azimuth and elevation directions the spacing between adjacent range bins increases with radial distance. Therefore, different interpolation points have different dependencies in the three directions, and the smoothing parameters need to be set according to these dependencies [34]. Compared with the traditional interpolation method, the adaptive Barnes interpolation method has a better smoothing effect.
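As an illustration, the following Python sketch shows how the adaptive Barnes weight of Eq. 2 and the interpolated value of Eqs. 1 and 3 might be computed for a single CAPPI grid point. The function names and the way the eight neighbouring range bins are passed in are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

K_THETA = 0.76  # azimuthal smoothing parameter used in this paper

def barnes_weight(r, theta, phi, r0, theta0, phi0):
    """Adaptive Barnes weight of Eq. 2 for one neighbouring range bin.

    r, theta, phi    : polar coordinates (km, rad, rad) of the detected value f_k
    r0, theta0, phi0 : polar coordinates of the interpolation point f_0
    """
    k_r = r0 ** 2 * K_THETA                 # Eq. 4: radial smoothing grows with range
    k_phi = np.cos(phi0) ** 2 * K_THETA     # Eq. 5: elevation smoothing shrinks with elevation
    return np.exp(-((r - r0) ** 2) / k_r
                  - ((theta - theta0) ** 2) / K_THETA
                  - ((phi - phi0) ** 2) / k_phi)

def interpolate_point(x0, y0, z0, neighbours):
    """Eqs. 1 and 3: interpolate f_0 at the Cartesian point (x0, y0, z0) from its
    neighbouring range bins, given as (value, r, theta, phi) tuples.
    Bins with a missing (None) value receive zero weight, as in Eq. 2."""
    r0 = np.sqrt(x0 ** 2 + y0 ** 2 + z0 ** 2)
    theta0 = np.arctan2(y0, x0)                      # robust form of arctan(Y0/X0)
    phi0 = np.arctan2(z0, np.hypot(x0, y0))
    num, den = 0.0, 0.0
    for f_k, r, theta, phi in neighbours:
        if f_k is None:                              # null bins get omega_k = 0
            continue
        w = barnes_weight(r, theta, phi, r0, theta0, phi0)
        num += w * f_k
        den += w
    return num / den if den > 0 else np.nan
```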

The inputs of deep learning algorithms are mostly standardized data. Before using the Faster-RCNN model to identify the precipitation clouds, standardization needs to be applied to the reflectivity data. Figure 3(a) is the CAPPI distribution of the reflectivity data at a height of 2 km, which corresponds to a reflectivity matrix with a size of \({400} \times {400}\) cells. Figure 3(b) is the distribution of the reflectivity matrix after standardization, which is given by

$$\overline{X} = \begin{cases} \dfrac{X - X_{Min}}{1.2 \times (X_{Max} - X_{Min})} + \dfrac{1}{6} & (X\ \text{is valid}) \\ 0 & (X\ \text{is null}) \end{cases}$$
(6)

where \(X\) is the input to the standardization function, and \(\overline{X}\) is the output after standardization.

Fig. 3

Standardization of radar reflectivity CAPPI data: a original, b standardized
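A minimal NumPy sketch of the standardization in Eq. 6, assuming missing bins are stored as NaN; the function name and the NaN convention are illustrative choices rather than the paper's exact code.

```python
import numpy as np

def standardize(X):
    """Eq. 6: map valid reflectivity values into roughly [1/6, 1] and set
    missing (NaN) bins to 0, so the network can distinguish empty cells."""
    X = np.asarray(X, dtype=float)
    out = np.zeros_like(X)
    valid = ~np.isnan(X)
    if valid.any():
        x_min, x_max = X[valid].min(), X[valid].max()
        out[valid] = (X[valid] - x_min) / (1.2 * (x_max - x_min)) + 1.0 / 6.0
    return out
```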

3.2 Radial projection of ground horizontal wind field

As the name suggests, convective clouds exhibit strong convective motion in the vertical direction, which should be an important basis for identifying convective clouds. However, the velocity data detected by the Doppler weather radar are the relative velocities of cloud particles in the radial direction, which are not sensitive to convective motion in the vertical direction, so it is difficult to apply this feature directly to the identification of precipitation clouds. Convective clouds usually consist of one or more convective cells, each with a horizontal scale ranging from several kilometers to tens of kilometers. Moreover, convective cells are usually marked by a compact reflectivity region or a strong updraft [20]. As shown in Fig. 4, the evolution of a convective cell usually includes three stages: the growth stage, the mature stage, and the dissipating stage [21].

Fig. 4

The evolution process of convective cells

As shown in Fig. 4, the growth stage of a convective cell is mainly controlled by the updraft, and the rising velocity of the updraft increases with height. The updraft is mainly caused by the convergence of surface air. The mature stage of a convective cell is the stage in which the updraft and downdraft coexist. In this stage, the updraft in the cloud reaches its maximum, while the downdraft is mainly caused by falling precipitation particles. Moreover, the downdraft diffuses continuously from the convective cloud center to the periphery, producing precipitation over a large area. The corresponding convergence and divergence fields on the ground are shown in Fig. 5. The dissipating stage of a convective cell is mainly controlled by the downdraft: precipitation at the cloud base becomes weaker and weaker, and the airflow at the cloud top is dominated by horizontal advection.

Fig. 5

Simulation diagram of ground wind field

In order to understand the distribution of convergence and divergence in the radial direction of the radar, the ground convergence and divergence fields are projected onto the radial velocity map corresponding to the 0.5° elevation; the results are shown in Fig. 6. Figure 6a shows the PPI distribution of the radial velocity corresponding to the ground convergence field. Four convergence fields of the same scale were simulated, located to the east, south, west and north of the radar station. The radial velocity of each convergence field is positive on the side near the radar station and negative on the side away from it. The radial velocity of the convergence field gradually increases from the outside inward and suddenly drops to zero at the center, owing to the uplift of the convergent airflow. Moreover, the convergence field has an obvious zero-velocity line at the boundary between positive and negative velocities. In addition, the PPI distribution of the Doppler velocity data corresponding to the divergence wind field is shown in Fig. 6b.

Fig. 6

The PPI distribution of the radar radial velocity data corresponding to a convergence and b divergence

As mentioned above, in the growth and mature stages of convective cells, the ground wind field is accompanied by convergence and divergence, which have obvious distribution characteristics in the radial velocity data detected by the radar at the minimum elevation. Moreover, the minimum elevation of Doppler weather radars is usually no more than 0.5°, and the vertical component of the radial velocity detected at the minimum elevation is close to zero. Therefore, the radial velocity detected at the minimum elevation can be regarded as the projection of the horizontal wind field onto the radar radial direction. In order to keep the radial velocity data consistent with the reflectivity CAPPI data in spatial distribution, the radial velocity data are projected onto the horizontal surface and taken as the radial velocity data of the ground horizontal wind field.

The spectral width of the radial velocity is an important Doppler weather radar data product, which reflects the dispersion of the radial velocity within a range bin; it is proportional to the variance of the velocity vectors of the scatterers in the range bin. In convergence and divergence fields, the distribution of velocity vectors within a range bin is usually chaotic, so the spectral width is usually large. Therefore, the spectral width of the radar radial velocity data is also applied to identify the precipitation clouds. In this paper, the radial velocity data and spectral width data corresponding to the ground wind field are obtained by projecting the radar detection data at the minimum elevation. As shown in Table 2, the size of the radial velocity data and spectral width data corresponding to the ground wind field is \({400} \times {400}\).
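The following sketch illustrates one way the minimum-elevation radial velocity (and, analogously, the spectral width) could be re-gridded onto the 400 × 400 ground grid, assuming the vertical velocity component at roughly 0.5° elevation is negligible. The nearest-bin mapping and the parameter names are simplifying assumptions rather than the exact procedure used in the paper.

```python
import numpy as np

def project_to_ground(v_radial, ranges_km, azimuths_rad,
                      elev_rad=np.deg2rad(0.5), grid=400, cell_km=0.5):
    """Map minimum-elevation radial velocity onto a 400x400 ground grid.

    v_radial     : (n_az, n_range) radial velocity in m/s, NaN where missing
    ranges_km    : (n_range,) slant range of each bin
    azimuths_rad : (n_az,) azimuth of each ray (clockwise from north)
    """
    out = np.full((grid, grid), np.nan)
    half = grid // 2
    for i, az in enumerate(azimuths_rad):
        for j, r in enumerate(ranges_km):
            v = v_radial[i, j]
            if np.isnan(v):
                continue
            # ground-projected position of the range bin (radar at grid centre)
            x = r * np.cos(elev_rad) * np.sin(az)
            y = r * np.cos(elev_rad) * np.cos(az)
            col = int(round(x / cell_km)) + half
            row = half - int(round(y / cell_km))
            if 0 <= row < grid and 0 <= col < grid:
                # at ~0.5 deg the vertical component is negligible, so the radial
                # velocity approximates the horizontal wind's radial projection
                out[row, col] = v / np.cos(elev_rad)
    return out
```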

3.3 The identification of precipitation clouds based on Faster-RCNN

In recent years, deep learning has developed rapidly, and target detection methods based on deep learning have achieved good results. References [21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38] describe the achievements of deep learning in target detection. Deep learning algorithms have a strong self-learning ability and can extract features from the original input data that reflect its structural information. Therefore, compared with traditional target detection algorithms, target detection methods based on deep learning have advantages in both accuracy and efficiency. Moreover, the emergence of GPUs guarantees the real-time performance of deep learning. In this paper, the Faster-RCNN model is adopted to identify the precipitation clouds, and its block diagram is shown in Fig. 7.

Fig. 7

The block diagram of precipitation clouds identification based on Faster-RCNN

In order to verify the role of radar Doppler velocity data in the identification of precipitation clouds, two Faster-RCNN models with similar network structures are constructed: FR-Z and FR-ZVW. The FR-Z model only takes reflectivity data as input, while the FR-ZVW model takes reflectivity, radial velocity and spectral width data as input. Before identifying precipitation clouds, the radar data are preprocessed as described above, including the CAPPI interpolation of reflectivity and the radial projection of the ground horizontal wind field. The Faster-RCNN model is composed of feature extraction, a Region Proposal Network (RPN) and classification, which are described below.

3.3.1 Feature extraction

VGG-Net [34] is a classic deep learning network model, which has great advantages in CNN feature extraction. In this architecture, multiple \({3} \times {3}\) convolution layers and \({2} \times {2}\) pooling layers are used to extract features from the original data, and the performance of the model is improved by increasing the number of layers in the network. Moreover, a stack of small convolution kernels can be equivalent to a single large convolution kernel, while greatly reducing the number of parameters and enhancing the nonlinear expression ability of the network. In this paper, VGG16-Net is applied to extract features from the radar data, and the architecture of the VGG16-Net used in this paper is shown in Fig. 8.

Fig. 8

The architecture of the VGG16-Net

Firstly, the input layer is a matrix with a size of \({400} \times {400} \times k\) (\(k = 7\) or \(k = 5\)), where \({400} \times {400}\) corresponds to the coverage of the radar data, and each resolution cell is \(0.5 \times 0.5\,{\text{km}}^{2}\). When \(k = 7\), the input layer consists of three parts: the reflectivity CAPPI data with a size of \({400} \times {400} \times {5}\), the radial velocity data of the ground horizontal wind field with a size of \({400} \times {400}\), and the spectral width data of the ground horizontal wind field with a size of \({400} \times {400}\). When \(k = 5\), the input layer only includes the reflectivity data. The architecture and layer parameters of VGG16-Net are shown in Table 3.

Table 3 The structure of VGG16-Net
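A condensed Keras sketch of a VGG16-style backbone for the \({400} \times {400} \times k\) radar input described above. The block layout follows the standard VGG16 convolutional stack with the last pooling layer dropped, which yields the 25 × 25 × 512 map used by the RPN; the exact configuration in Table 3 may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers

def vgg16_backbone(k=7):
    """VGG16-style feature extractor for the 400x400xk radar input.
    Four 2x2 poolings reduce 400x400 to the 25x25x512 map fed to the RPN
    (the fifth pooling of the original VGG16 is dropped, as in Faster-RCNN)."""
    inputs = tf.keras.Input(shape=(400, 400, k))
    x = inputs
    blocks = [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]  # (filters, conv layers)
    for i, (filters, n_convs) in enumerate(blocks):
        for _ in range(n_convs):
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        if i < 4:                           # pool after the first four blocks only
            x = layers.MaxPooling2D(2)(x)
    return tf.keras.Model(inputs, x, name=f"vgg16_backbone_k{k}")

# FR-Z would use k=5 (reflectivity CAPPI only); FR-ZVW uses k=7.
backbone = vgg16_backbone(k=7)
print(backbone.output_shape)                # (None, 25, 25, 512)
```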

3.3.2 RPN

Target detection algorithms based on CNNs, such as R-CNN, Fast-RCNN and Faster-RCNN, have achieved good results in recent years [29,30,31,32,33]. R-CNN and Fast-RCNN adopt the selective search method to generate candidate regions, which overlap heavily and are time-consuming to compute, greatly reducing the detection efficiency of the model. Faster-RCNN uses a neural network, namely the RPN [33], to generate candidate regions, which greatly improves the efficiency of target detection. The architecture of the RPN is shown in Fig. 9.

Fig. 9

The architecture of RPN

A detailed explanation of the RPN in Fig. 9 is given below:

1. The input data is a \({25} \times {25} \times {512}\) feature map, which is obtained by VGG16-Net.

2. To generate region proposals, a small network is slid over the input data. This network is fully connected to a \(3 \times 3\) spatial window of the \({25} \times {25} \times {512}\) feature map, and each sliding window is mapped to a 512-d vector.

3. At each sliding-window location, k region proposals are predicted simultaneously (here \(k = 9\)). The 9 proposals are parameterized relative to 9 reference boxes, called anchors, which are generated from 3 scales and 3 aspect ratios (a sketch of the anchor generation is given after this list).

4. As described in (2), each region proposal can be mapped to a 512-d vector. This vector is fed into two sibling fully-connected layers: a box-regression layer and a box-classification layer. The box-classification layer gives the probability that the region proposal belongs to the foreground or the background, and the box-regression layer gives the location of the region proposal.

5. The outputs are 300 optimal region proposals, which are selected from the large number of candidates by using the Non-Maximum Suppression method [35].
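A small sketch of how the 9 anchors per sliding-window position can be generated from 3 scales and 3 aspect ratios; the scale and ratio values shown are illustrative assumptions, since the paper does not list them.

```python
import numpy as np

def generate_anchors(scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Return 9 anchor boxes (w, h) centred on one sliding-window position.
    Each scale is an anchor side length in input pixels; each ratio is h/w."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s / np.sqrt(r)
            h = s * np.sqrt(r)
            anchors.append((w, h))
    return np.array(anchors)                 # shape (9, 2)

# Shifting these 9 anchors to every position of the 25x25 feature map
# (stride 16 in the 400x400 input) gives 25 * 25 * 9 = 5625 candidate boxes,
# from which the RPN keeps the 300 best after non-maximum suppression.
print(generate_anchors().shape)
```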

3.3.3 Classification

The architecture of the classification stage is shown in Fig. 10. Each optimal region proposal is mapped onto the \({25} \times {25} \times {512}\) feature map to obtain the corresponding feature expression. This feature expression is divided into \({7} \times {7}\) sub-blocks along the horizontal and vertical directions, and each sub-block is then max-pooled to obtain a feature map with a size of \({7} \times {7}\). This process is called ROI Pooling [33].

Fig. 10

The architecture of classification

As shown in Fig. 10, a region proposal of any size can be normalized into a \({7} \times {7} \times {512}\) feature map by ROI Pooling. This \({7} \times {7} \times {512}\) feature map is fed into two consecutive fully-connected layers, resulting in a \({1} \times 4096\) vector. This vector is then fed into two sibling fully-connected layers: a box-prediction layer and a box-classification layer. The probability and position of the final target are given by the box-classification layer and the box-prediction layer, respectively.
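A simplified NumPy sketch of the ROI Pooling step: each region proposal on the 25 × 25 × 512 feature map is split into a 7 × 7 grid of sub-blocks and each sub-block is max-pooled. Coordinate rounding and the edge cases of the real implementation are deliberately omitted.

```python
import numpy as np

def roi_pool(feature_map, roi, out_size=7):
    """feature_map : (25, 25, 512) backbone output
    roi            : (x1, y1, x2, y2) proposal in feature-map coordinates
    returns        : (7, 7, 512) fixed-size feature for the classification head"""
    x1, y1, x2, y2 = roi
    pooled = np.zeros((out_size, out_size, feature_map.shape[2]))
    # integer boundaries of the 7x7 sub-blocks
    xs = np.linspace(x1, x2, out_size + 1).astype(int)
    ys = np.linspace(y1, y2, out_size + 1).astype(int)
    for i in range(out_size):
        for j in range(out_size):
            block = feature_map[ys[i]:max(ys[i + 1], ys[i] + 1),
                                xs[j]:max(xs[j + 1], xs[j] + 1), :]
            pooled[i, j] = block.max(axis=(0, 1))    # max-pool each sub-block
    return pooled
```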

3.3.4 Training and testing

As shown in Fig. 7, the Faster-RCNN model is composed of the detection network and the RPN, and the two networks share the feature extraction module. The alternating training method, which is derived from Reference [33], is applied to train the Faster-RCNN model. The training process is as follows (the schedule is summarized in the sketch after this list):

1. Train the RPN: the VGG16-Net pre-training model was used to initialize the VGG16-Net (except the first convolution layer, which was randomly initialized), and the number of iterations was 10,000.

2. Train the detection network using the proposals from (1); the number of iterations was 10,000.

3. Fix the feature extraction module and fine-tune the parameters of the non-shared layers in the RPN; the number of iterations was 5000.

4. Fine-tune the parameters of the non-shared layers in the detection network; the number of iterations was 5000.
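For clarity, the four-step alternation can be written down as plain data; the dictionary keys and stage descriptions below are illustrative and not part of any training framework.

```python
# A plain-data summary of the four-step alternating training schedule above;
# the keys and strings are illustrative placeholders, not library options.
alternating_schedule = [
    {"step": 1, "train": "RPN",
     "init": "pretrained VGG16 (first conv layer randomly re-initialized)",
     "frozen": [], "iterations": 10000},
    {"step": 2, "train": "detector", "init": "proposals from step 1",
     "frozen": [], "iterations": 10000},
    {"step": 3, "train": "RPN", "init": "shared features from step 2",
     "frozen": ["feature extraction (VGG16)"], "iterations": 5000},
    {"step": 4, "train": "detector", "init": "proposals from step 3",
     "frozen": ["feature extraction (VGG16)"], "iterations": 5000},
]
for stage in alternating_schedule:
    print(stage)
```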

In this paper, the experimental environment is as follows: NVIDIA Tesla P40 GPU, CUDA 9.1, Ubuntu 16.04, and 24 GB of memory. The development platform is Python with TensorFlow. The training set and testing set are described in Table 2. The test results show that the precision of the identification results is 96%.

3.4 The identification of precipitation clouds based on the fuzzy logic method

In Reference [12], six reflectivity-morphology parameters are presented to identify the precipitation cloud, which are described as follows (a sketch of the gradient and density computations is given after this list).

1. Composite reflectivity (\(Z_{CR}\)). \(Z_{CR}\) is defined as the maximum reflectivity in the vertical direction. However, the radar volume scan data are discrete in the vertical direction, so the reflectivity data must be interpolated in the vertical direction to obtain the maximum reflectivity.

2. Echo top height associated with 35 dBZ reflectivity (\(H_{ET}\)). \(H_{ET}\) is defined as the maximum height in the vertical direction with \(Z > 35\) dBZ.

3. Vertically integrated liquid water content (\(M_{VIL}\)). \(M_{VIL}\) is defined as the sum of the liquid water mass of all drops in a unit volume, given by Eq. 7, where \(\rho\) is the water density, \(D\) is the diameter of the precipitation particles, and \(N\left( D \right)\) is the drop size distribution of the precipitation particles.

$$M_{VIL} = \frac{\pi }{6}\rho \int_{0}^{\infty } N\left( D \right)\, D^{3} \,dD$$
(7)

4. Horizontal gradient of \(Z_{CR}\) (\(G_{CR}\)), defined by Eq. 8, where \(\left( {i,j} \right)\) is the grid coordinate of \(Z_{CR}\) and \(n = 2\).

$$G_{CR} = \max \left( {\left| {\lg \frac{{\left| {Z_{CR} (i + n) - Z_{CR} (i - n)} \right|}}{2n}} \right|,\left| {\lg \frac{{\left| {Z_{CR} (j + n) - Z_{CR} (j - n)} \right|}}{2n}} \right|} \right)$$
(8)

5. Horizontal gradient of \(H_{ET}\) (\(G_{ET}\)), defined by Eq. 9, where \(\left( {i,j} \right)\) is the grid coordinate of \(H_{ET}\) and \(n = 2\).

$$G_{ET} = \max \left( {\left| {\lg \frac{{\left| {H_{ET} (i + n) - H_{ET} (i - n)} \right|}}{2n}} \right|,\left| {\lg \frac{{\left| {H_{ET} (j + n) - H_{ET} (j - n)} \right|}}{2n}} \right|} \right)$$
(9)

6. Density of \(M_{VIL}\) (\(D_{VIL}\)), defined by Eq. 10, where \(H_{Max}\) is the top height of the precipitation cloud detected by the radar and \(H_{Min}\) is the bottom height.

$$D_{VIL} = \frac{{M_{VIL} }}{{H_{Max} - H_{Min} }}$$
(10)
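A sketch of how the gradient parameters \(G_{CR}\) (Eq. 8) and \(G_{ET}\) (Eq. 9) and the VIL density \(D_{VIL}\) (Eq. 10) might be computed on gridded fields with NumPy; boundary handling and the treatment of zero differences are simplifying assumptions.

```python
import numpy as np

def horizontal_gradient(field, n=2):
    """Eqs. 8-9: for each interior grid point, take the larger of the absolute
    base-10 logs of the centred differences along the two grid axes.
    (Zero differences give -inf; a real implementation would guard for that.)"""
    rows, cols = field.shape
    g = np.full((rows, cols), np.nan)
    gi = np.abs(np.log10(np.abs(field[2 * n:, :] - field[:-2 * n, :]) / (2 * n)))
    gj = np.abs(np.log10(np.abs(field[:, 2 * n:] - field[:, :-2 * n]) / (2 * n)))
    g[n:-n, n:-n] = np.maximum(gi[:, n:-n], gj[n:-n, :])
    return g

def vil_density(m_vil, h_max, h_min):
    """Eq. 10: vertically integrated liquid water content per unit cloud depth."""
    return m_vil / (h_max - h_min)
```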

Moreover, the fuzzy logic method is applied to identify the precipitation cloud based on the distributions of \(G_{CR}\), \(G_{ET}\) and \(D_{VIL}\). This identification method has been widely used in China. Fuzzification is the transformation of the input data into fuzzy membership values by using membership functions. Here, the trapezoidal (trapmf) membership function is applied, which is given by Eq. 11.

$$T\left( {X,X_{1} ,X_{2} ,X_{3} ,X_{4} } \right) = \begin{cases} 0 & X < X_{1} \\ \dfrac{X - X_{1}}{X_{2} - X_{1}} & X_{1} < X < X_{2} \\ 1 & X_{2} < X < X_{3} \\ \dfrac{X_{4} - X}{X_{4} - X_{3}} & X_{3} < X < X_{4} \\ 0 & X > X_{4} \end{cases}$$
(11)

where \(X\) is the input, and \(X_{1}\)–\(X_{4}\) are the four inflection points of the trapmf membership function. The trapmf membership functions corresponding to the stratiform cloud are given in Fig. 11.

Fig. 11

The trapmf membership functions corresponding to the stratiform cloud

Rule inference is the weighted sum of the membership degrees of the input parameters, which is given by Eq. 12.

$$R = \sum\limits_{j = 1}^{3} {W_{j} \bullet MBF(X_{j} )}$$
(12)

where \(MBF(X_{j})\) (\(j = 1,2,3\)) is the membership degree of the j-th input parameter, \(W_{j}\) is the weight of the j-th input parameter (\(W_{{G_{CR} }} = 0.4\), \(W_{{D_{VIL} }} = 0.3\) and \(W_{{G_{ET} }} = 0.3\)), and \(R\) can be understood as the probability that the precipitation cloud corresponding to the input parameters is stratiform cloud. Therefore, when \(R \ge 0.5\), the output is stratiform cloud; otherwise, the output is convective cloud. The advantage of this method is that the classification results are based on membership degrees rather than on specific threshold values. Moreover, this method is not limited by a statistical formula, and the final result is not affected by inaccurate values of individual parameters.
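A compact sketch of the fuzzy-logic classification of Eqs. 11–12. The breakpoints \(X_{1}\)–\(X_{4}\) must be supplied by the caller, because the actual membership-function inflection points are given only graphically in Fig. 11.

```python
def trapmf(x, x1, x2, x3, x4):
    """Eq. 11: trapezoidal membership function."""
    if x <= x1 or x >= x4:
        return 0.0
    if x1 < x < x2:
        return (x - x1) / (x2 - x1)
    if x2 <= x <= x3:
        return 1.0
    return (x4 - x) / (x4 - x3)          # x3 < x < x4

def stratiform_probability(g_cr, d_vil, g_et, breakpoints):
    """Eq. 12: weighted sum of membership degrees for the stratiform class.
    breakpoints maps each parameter name to its (X1, X2, X3, X4) tuple
    (placeholder values; the real ones come from Fig. 11)."""
    weights = {"G_CR": 0.4, "D_VIL": 0.3, "G_ET": 0.3}
    values = {"G_CR": g_cr, "D_VIL": d_vil, "G_ET": g_et}
    r = sum(weights[p] * trapmf(values[p], *breakpoints[p]) for p in weights)
    return r                              # stratiform if r >= 0.5, else convective
```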

4 Results and discussions

In this paper, the performance of the new identification method is evaluated on data detected during a mixed cloud precipitation process by the CINRAD-SA dual-polarization Doppler weather radar. Moreover, the traditional identification method proposed in Reference [12] is also applied to identify the precipitation clouds in the same mixed cloud precipitation process.

On 19 March 2016, a heavy rainfall process was detected by the CINRAD-SA radar located in Guangzhou, Guangdong, China (23°01′N, 113°35′E). The experimental data come from 12 continuous volume scans in the period from 2342 UTC on March 19, 2016 to 0054 UTC on March 20, 2016, with an interval of 6 min between adjacent volume scans. The reflectivity CAPPI data are obtained by using the adaptive Barnes interpolation method introduced in Sect. 3.1. Figure 12a shows the reflectivity CAPPI distribution maps at 2–6 km altitudes; the top row is the original CAPPI distribution, and the bottom row is the standardized CAPPI distribution. This paper only processes the data within 100 km of the radar; the data resolution is 500 × 500 m, and the distance between adjacent range circles is 25 km. Figure 12b is the radial velocity distribution map of the ground horizontal wind field, and Fig. 12c is the spectral width distribution map of the ground horizontal wind field. Each of these two figures contains two parts: the original distribution map on the left and the standardized distribution map on the right.

Fig. 12

The pre-processed radar data corresponding to 2342UTC, a CAPPI distribution map of the reflectivity at 2–6 km altitudes, b Radial velocity distribution map of the ground horizontal wind field, c Spectral width data distribution map of the ground horizontal wind field

As shown in Fig. 12a, the red area to the northeast of the radar station is a heavy precipitation area. The reflectivity in this area exceeds 40 dBZ, and the central reflectivity reaches 60 dBZ. In the horizontal direction, the radar reflectivity decreases gradually outward from the reflectivity center, dropping to 30 dBZ at 20 km from the center, which indicates that the reflectivity in this region has a large horizontal gradient. In the vertical direction, the radar reflectivity reaches its maximum at a height of 3 km and gradually decreases as the height increases or decreases, which indicates that the reflectivity in this region also has a large vertical gradient.

As shown in Fig. 12b, the wind on the left side of the radar station is southeasterly and that on the right side is southwesterly, accompanied by slight convergence. As described in Sect. 3.2, it can be inferred that this time corresponds to the middle and late period of convective cloud precipitation. The radial velocity changes abruptly in the heavy precipitation area mentioned above, and it decreases gradually toward the precipitation center. In Fig. 12c, the spectral width in this heavy precipitation area gradually increases toward the precipitation center. Combining these characteristics, the precipitation cloud corresponding to the heavy precipitation region should be convective cloud.

The identification result of the FR-Z model corresponding to 2342 UTC is shown in Fig. 13a, where the input only includes the reflectivity data. The area selected by a red box is convective cloud, and the annotation above the box is the probability. The identification result of the FR-ZVW model is shown in Fig. 13b, where the input includes reflectivity, radial velocity and spectral width data. It can be seen that the identification results are basically consistent with the analysis above. In order to verify the validity of the method proposed in this paper, the fuzzy logic method is also used to identify the precipitation clouds corresponding to 2342 UTC, and the identification results are shown in Fig. 14a. The red areas are convective clouds, and the other precipitation cloud areas are stratiform clouds by default.

Fig. 13

The identification result of precipitation clouds corresponding to 2342UTC. a FR-Z, b FR-ZVW

Fig. 14

The identification result of precipitation clouds corresponding to 2342UTC. a Traditional method, b comparison

The identification results of the three methods are compared in Fig. 14b. The red and blue boxes correspond to the convective cloud areas shown in Fig. 13a and Fig. 13b, respectively, and the red areas correspond to the convective clouds in Fig. 14a, which are identified by the fuzzy logic method. It can be seen that the identification results of the fuzzy logic method and the FR-Z method are basically consistent. However, the identification result of the FR-ZVW method differs from the other two methods for the small-scale strong echo region at the bottom right. As shown in Fig. 12, the radial velocity distribution of this region is uniform, without convergence or divergence, and the spectral width is small. The reflectivity of this region has large horizontal and vertical gradients, but the reflectivity tends to weaken. It can be inferred that this region is at the end of convective cloud precipitation. Therefore, the identification result of the FR-ZVW method is more reasonable. In addition, the three methods are used to identify the precipitation clouds in the 12 continuous volume scans, and the identification results are shown in Fig. 15.

Fig. 15

The identification results of precipitation clouds corresponding to 2342UTC-0054UTC

As shown in Fig. 15, the identification results of the three methods are basically consistent. There are some small red areas whose reflectivity has large horizontal and vertical gradients but tends to decrease; however, these regions have no convective characteristics in the velocity field, so the identification results of the FR-ZVW method are more reliable. In addition, the testing set in Table 2 is used to evaluate the three identification methods, and the evaluation results are shown in Table 4. The Recall and Precision listed in Table 4 are defined in Eqs. 13 and 14, respectively.

$${\text{Recall}} = \frac{{C_{{{\text{Identified}}}} }}{{{\text{C}}_{{{\text{Total}}}} }}$$
(13)
$${\text{Precision}} = \frac{{C_{{{\text{Correct}}}} }}{{C_{{{\text{Identified}}}} }}$$
(14)

where \({\text{C}}_{{{\text{Total}}}}\) is the total number of convective clouds in the testing set, \(C_{{{\text{Identified}}}}\) is the number of identified convective clouds, and \(C_{{{\text{Correct}}}}\) is the number of correctly identified convective clouds in \(C_{{{\text{Identified}}}}\). It can be seen that the fuzzy logic method has higher recall but poorer precision, whereas the Faster-RCNN methods, including FR-Z and FR-ZVW, have higher precision at the cost of some recall. Compared with the FR-Z method, the FR-ZVW method is based on multiple radar detection parameters, which makes its identification results more reliable.
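For completeness, Eqs. 13–14 can be written as a small helper; the argument names follow the counts defined above.

```python
def recall_precision(c_total, c_identified, c_correct):
    """Eqs. 13-14: recall and precision of convective-cloud identification.

    c_total      : number of convective clouds in the testing set
    c_identified : number of convective clouds identified by the method
    c_correct    : number of correctly identified convective clouds
    """
    recall = c_identified / c_total
    precision = c_correct / c_identified
    return recall, precision
```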

Table 4 The comparison results of identification methods

5 Conclusions

With the continued threat from severe convective weather, the identification of precipitation clouds is crucial to weather forecasting and aviation meteorology. Traditional identification methods utilize parameters calculated from radar reflectivity to identify the precipitation clouds, based on the differences in radar reflectivity distribution morphology between stratiform and convective clouds in three-dimensional space. These methods share a common shortcoming: the radial velocity data detected by Doppler weather radar have not been applied to the identification of precipitation clouds because they are insensitive to convective movement in the vertical direction, even though convective movement in the vertical direction is an important feature for precipitation cloud identification. Section 3.2 draws the conclusion that the ground horizontal wind field is often accompanied by convergence and divergence during the growth and mature stages of convective cells. Moreover, the radial velocity detected by the Doppler weather radar at the minimum elevation can be regarded as the projection of the horizontal wind field onto the radar radial direction. Therefore, the radial velocity data detected by the Doppler weather radar at the minimum elevation can be used for the identification of precipitation clouds.

In this paper, a new method for precipitation cloud identification was described, which makes full use of the data products of the Doppler weather radar, including reflectivity, radial velocity and the spectral width of the radial velocity. In order to verify the role of Doppler radial velocity data in the identification of precipitation clouds, two Faster-RCNN models with similar network structures were constructed: FR-Z and FR-ZVW. The FR-Z model only takes reflectivity data as input, while the FR-ZVW model takes reflectivity, radial velocity and spectral width data as input. Moreover, the traditional precipitation cloud identification method based on fuzzy logic was also applied to verify the accuracy of the new identification methods. Experiments show that the new method has great advantages in efficiency and precision compared with the traditional method, and that the identification results of FR-ZVW are more reliable than those of FR-Z. However, the distribution map of the results identified by the fuzzy logic method is more refined.

This study is the first to report applying the radial velocity data detected by the Doppler weather radar to the identification of precipitation clouds. Moreover, a deep learning algorithm was applied to identify the precipitation clouds. Compared with traditional identification methods, this method achieves better results in efficiency and accuracy. However, the training process of the Faster-RCNN model usually requires a large amount of labeled data and powerful computational resources, and the training samples described in Sect. 2 are clearly insufficient. Future work should focus on collecting more samples for training and testing of the Faster-RCNN model. Moreover, most of the running time of the above methods is spent on the interpolation of the radar volume scan data, so future work should also minimize the running time of the precipitation cloud identification system by optimizing the interpolation algorithm.

Availability of data and materials

The datasets supporting the conclusions of this article are private and come from the CMA Key Laboratory of Atmospheric Sounding, Chengdu, Sichuan, China.

References

  1. R.A. Brown, Doppler weather radar, in Encyclopedia of Natural Hazards (2013)

  2. T. Schuur, A. Ryzhkov, P. Heinselman, et al. Observations and Classification of Echoes with the Polarimetric WSR-88D radar (2003)

  3. J. Evans, D. Turnbull, Development of an automated windshear detection system using Doppler weather radar. Proc. IEEE 77(11), 1661–1673 (1989)


  4. V.N. Bringi, V. Chandrasekar, Polarimetric Doppler weather radar: principles and applications. Atmos. Res. 63(1), 159–160 (2002)


  5. Z. Zhang, G. Zheng, M. Wei, et al. Research on the system of advanced X-band Doppler weather radar with dual-linear polarization capability, In 2012 IEEE Fifth International Conference on Advanced Computational Intelligence (ICACI) (IEEE, 2012), pp. 739–741

  6. J. Hershberger, T. Pratt, R. Kossler. Automated calibration of a dual-polarized SDR radar, in Antenna Measurements & Applications (IEEE, 2017)

  7. P.M. Austin, A.C. Bemis, A quantitative study of the ‘BRIGHT Band’ in radar precipitation echoes. J. Atmos. Sci. 7(2), 145–151 (1950)


  8. R.A. Houze Jr., Climatological study of vertical transports by cumulus-scale convection. J. Atmos. Sci. 30(6), 1112–1123 (1973)

  9. D.D. Churchill, R.A. Houze Jr., Development and structure of winter monsoon cloud clusters on 10 December 1978. J. Atmos. Sci. 41(6), 933–960 (1984)

  10. M. Steiner, R.A. Houze, S.E. Yuter, Climatological characterization of three-dimensional storm structure from operational radar and rain gauge data. J. Appl. Meteorol. 34(9), 1978–2007 (1995)


  11. M.I. Biggerstaff, S.A. Listemaa, An improved scheme for convective/stratiform echo classification using radar reflectivity. J. Appl. Meteorol. 39(12), 2129–2150 (2000)

  12. X. Yanjiao, L. Liping, Identification of stratiform and convective cloud using 3D radar reflectivity data. Chin. J. Atmos. Sci. 31(4), 645–654 (2007)


  13. O. Bousquet, M. Chong, A multiple-Doppler synthesis and continuity adjustment technique (MUSCAT) to recover wind components from Doppler radar measurements. J. Atmos. Ocean. Technol. 15(2), 343–359 (1998)


  14. D.C. Dowell, A. Shapiro, Stability of an iterative dual-Doppler wind synthesis in Cartesian coordinates. J. Atmos. Ocean. Technol. 20(11), 1552–1559 (2003)


  15. S.A. Rutledge, W.A. Petersen, Vertical radar reflectivity structure and cloud-to-ground lightning in the stratiform region of MCSs: further evidence for in situ charging in the stratiform region. Mon. Weather Rev. 122(8), 1760 (1994)


  16. S.Y. Matrosov, T. Uttal, D.A. Hazen, Evaluation of radar reflectivity based estimates of water content in stratiform marine clouds. J. Appl. Meteorol. 43(43), 405–419 (2004)


  17. S. Kumar, G.S. Bhat, Vertical structure of radar reflectivity in deep intense convective clouds over the tropics, in Noaa Satelite Conference (2016)

  18. P.M. Pauley, X. Wu, The theoretical, discrete, and actual response of the Barnes objective analysis scheme for one- and two-dimensional fields. Mon. Weather Rev. 118(5), 1145–1164 (1990)


  19. H. Yun-Xian, Z. Ying, Comparison of interpolation schemes for the Doppler weather radar data. Remote Sens. Inf. 21(2), 39–45 (2008)


  20. D. Rosenfeld, Y. Mintz, Evaporation of rain falling from convective clouds as derived from radar measurements. J. Appl. Meteorol. Climatol. 27(3), 209–215 (1988)


  21. A.J. Illingworth, The formation of rain in convective clouds. Nature 336(6201), 754–756 (1988)


  22. C.L.P. Chen, Deep learning for pattern learning and recognition, in 2015 IEEE 10th Jubilee International Symposium on Applied Computational Intelligence and Informatics (SACI) (IEEE, 2015), pp. 17–17

  23. Y. Lecun, Y. Bengio, G. Hinton, Deep learning. Nature 521(7553), 436–444 (2015)


  24. J. Schmidhuber, Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2014)


  25. P. Felzenszwalb, D. Mcallester, D. Ramanan. A discriminatively trained, multiscale, de-formable part model, in IEEE Conference on Computer Vision and Pattern Recognition (2008).

  26. Y. Sun, X. Wang , X. Tang, Deep convolutional network cascade for facial point detection, in 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2013)

  27. J. Yan, L. Zhen, L. Wen, et al. The fastest deformable part model for object detection, in Computer Vision & Pattern Recognition (2014)

  28. P. Sermanet, D. Eigen, X. Zhang, et al. OverFeat: integrated recognition, localization and detection using convolutional networks. Eprint Arxiv (2013)

  29. A. Krizhevsky, I. Sutskever, G.E. Hinton, ImageNet classification with deep convolutional neural networks, in Advances in Neural Information Processing Systems, vol. 25 (2012)

  30. R. Girshick, J. Donahue, T. Darrell, et al. Rich feature hierarchies for accurate object detection and semantic segmentation, in 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE Computer Society, 2014)

  31. K. He, X. Zhang, S. Ren et al., Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2014)


  32. R. Girshick. Fast R-CNN. Computer Science (2015)

  33. S. Ren, K. He, R. Girshick, et al. Faster R-CNN: towards real-time object detection with region proposal networks, in International Conference on Neural Information Processing Systems (2015)

  34. K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, in Computer Science (2014).

  35. A. Neubeck, L.J.V. Gool, Efficient non-maximum suppression, in 18th international conference on pattern recognition (ICPR 2006), 20–24 August 2006, Hong Kong, China (IEEE Computer Society, 2006)

  36. J. Xia, Ke. He, Xu. Wei, S. Zhang, L. Fan, G.K. Karagiannidis, A MIMO detector with deep learning in the presence of correlated interference. IEEE Trans. Veh. Technol. 69(4), 4492–4497 (2020)


  37. J. Xia, L. Fan, Xu. Wei, X. Lei, X. Chen, G.K. Karagiannidis, A. Nallanathan, Secure cache-aided multi-relay networks in the presence of multiple eavesdroppers. IEEE Trans. Commun. 67(11), 7672–7685 (2019)


  38. J. Wang, Z. Na, X. Liu, Collaborative design of multi-UAV trajectory and resource scheduling for 6G-enabled internet of things. IEEE Internet Things J. (2020). https://doi.org/10.1109/JIOT.2020.3031622



Acknowledgements

We would like to sincerely thank our colleagues at the CMA Key Laboratory of Atmospheric Sounding, located at Chengdu University of Information Technology, including the teachers Xu Wang and Tao Liu, for their help.

Funding

This research was funded by the National Natural Science Foundation of China (Joint Fund of Civil Aviation Research, award number U1733103).

Author information


Contributions

HW conceived the study. YR designed the study and wrote the manuscript. JW marked the samples. LT collected the data and revised the manuscript. XL drew the figures and improved the structure of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Haijiang Wang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ran, Y., Wang, H., Tian, L. et al. Precipitation cloud identification based on faster-RCNN for Doppler weather radar. J Wireless Com Network 2021, 19 (2021). https://doi.org/10.1186/s13638-021-01896-5
