
Learning group patterns for ground-based cloud classification in wireless sensor networks

Abstract

Cloud classification from ground-based images is a challenging task due to extreme variations under different atmospheric conditions. The development of wireless sensor networks (WSN) makes it possible to understand and classify clouds more accurately. Recent research has focused on extracting discriminative cloud image features in WSN, which plays a crucial role in achieving competitive classification performance. In this paper, a novel feature extraction algorithm that learns group patterns in WSN is proposed for ground-based cloud classification. The proposed descriptors take texture resolution variations into account by cascading the salient local binary pattern (SLBP) information of hierarchical spatial pyramids. Through learning group patterns, we obtain more useful information for cloud representation in WSN. Experimental results on ground-based cloud databases demonstrate that the proposed method achieves better results than current methods.

1 Introduction

Clouds play an important role in the earth’s radiation budget through their absorption and scattering of solar and infrared radiation, and their variation is an important factor in climate change [1, 2]. Most cloud-related studies require ground-based cloud observation techniques, such as ground-based cloud classification [3, 4], cloud cover (or cloud fraction) evaluation [5], and cloud height measurement. Among these, ground-based cloud classification has attracted much attention from the research community, because successful cloud classification can improve the precision of weather prediction and help us understand climatic development [6]. Clouds are currently studied using both satellites and ground-based weather stations. Some work focuses on classifying clouds in satellite images [7]. However, the information extracted from large-scale satellite images fails to capture the details of clouds because these images generally have low resolution. In contrast, ground-based cloud observations can obtain richer and more accurate cloud information. Nowadays, ground-based clouds are classified by professionally trained observers. However, different observers obtain discrepant classification results due to differing levels of professional skill. Furthermore, this work is complicated and time-consuming. Hence, automatic ground-based cloud classification is a challenging task and is still under development.

Ground-based sky-imaging devices have been widely used for obtaining information on sky conditions. Typical devices, including the WSI (whole sky imager) [8, 9], the TSI (total sky imager) [10], and the ICI (infrared cloud imager) [11], can provide continuous sky images from which cloud macroscopic properties can be inferred. Traditionally, cloud classification techniques handle cloud images captured by only one image sensor.

Recently, wireless sensor networks (WSN) have attracted a lot of attention, particularly with the development of smart sensors [12, 13]. WSN can be applied in many fields including remote environmental monitoring and object classification. When each image sensor serves as a sensor node, WSN can be employed to classify clouds. In this paper, we focus on cloud classification in WSN.

Based on the above devices, many methods have been proposed for ground-based cloud classification [3, 9, 14]. Singh and Glennen used co-occurrence matrices and autocorrelation to extract features from common digital images for cloud classification [15]. Calbó and Sabburg applied statistical texture features and pattern features based on the Fourier spectrum to classify eight predefined sky conditions [16]. Heinle et al. proposed an approach that extracts spectral features and simple textural features, such as energy and entropy, for a fully automated classification algorithm in which seven different sky conditions are distinguished [9]. Zhuo et al. [17] proposed the color census transform to capture texture and color information for cloud classification. Although these works are suggestive, many important problems in ground-based cloud classification have not yet been explored. For example, the extracted features are not discriminative enough to describe ground-based cloud images, which might lead to poor classification performance.

Clouds can be regarded as a kind of natural texture, so it is reasonable to handle ground-based cloud images with texture classification methods. As a classical texture descriptor, the local binary pattern (LBP) [18] is particularly popular due to its simplicity and efficiency, and various extensions of the conventional LBP descriptor have been made [14, 19–21]. Due to their excellent performance, LBP and its extensions have been successfully utilized in image classification and face recognition [22–24]. The uniform patterns of the LBP code (the uniform LBP for short) have been proposed as a means of improving the performance of LBP-based methods. However, the uniform LBP patterns do not account for a high proportion of the patterns in cloud images and therefore cannot capture the fundamental properties of these images. Liao et al. [21] proposed the dominant LBP (DLBP) as an improved strategy to solve this problem, but the DLBP-based method only considers the occurrences of salient patterns, so the pattern type information is lost. Liu et al. [14] proposed the salient LBP (SLBP) descriptor, which takes advantage of the most frequently occurring patterns to capture descriptive information. Although SLBP overcomes this disadvantage of conventional LBP, its basic assumption is that the texture resolution of an image is fixed, as shown in Fig. 1a. In fact, texture patches in a cloud image can have various resolutions, as shown in Fig. 1b: each of the two examples in Fig. 1a has a fixed resolution, while the texture resolutions within each image in Fig. 1b vary significantly. Compared with features drawn from various resolutions, texture information at a single fixed resolution usually has less discriminative power.

Fig. 1 Ground-based cloud image patches with various resolutions. a Each image has a fixed resolution; the images share the same content at different resolutions. b The texture resolution varies within each cloud image patch

In order to obtain the resolution information of cloud images in WSN, we learn the SLBP for each resolution and then put all the patterns together to form the final representation. Specifically, we propose a novel feature extraction algorithm that learns group patterns (LGP) for ground-based cloud classification in WSN. The proposed descriptors take texture resolution variations into account by cascading the SLBP information of hierarchical spatial pyramids. Through learning group patterns, we obtain more useful information for cloud representation in WSN.

The rest of this paper is organized as follows. In Section 2, SLBP is briefly reviewed. The group patterns with pyramid representation are then introduced in detail in Section 3. In Section 4, experimental results and discussions are given. Finally, conclusions are drawn in Section 5.

2 Brief review of SLBP

Conventional LBP, proposed by Ojala et al. [18], is considered an effective descriptor for texture classification. The LBP operator labels each pixel in the image by computing the sign of the differences between the central pixel and its neighboring pixels. The result is a decimal number computed from the corresponding binary string, and the image is then represented by the histogram of these decimal numbers. The rotation-invariant LBP value for the central pixel is computed as

$$ \text{LBP}_{P,R}^{ri}=\min_{0\leq l<P}\left\{\sum\limits^{P-1}_{p=0}s(g_{p}-g_{c})\times 2^{[(p+l)\,\text{mod}\,P]}\right\} $$
(1)

where g_c represents the gray value of the central pixel, g_p (p=0,⋯,P−1) denotes the gray value of the pth neighboring pixel on a circle of radius R, and P is the total number of neighbors. Suppose the coordinate of g_c is (0,0); then the coordinates of g_p are (R cos(2πp/P), R sin(2πp/P)). The gray values of neighbors that do not fall on the image grid are calculated by interpolation. The step function s(x) is defined as s(x)=1 if x≥0 and s(x)=0 otherwise. The minimum value in Eq. (1) is the label of the rotation-invariant LBP at the central pixel.
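To make the computation concrete, the following minimal sketch (in Python with NumPy) computes the rotation-invariant LBP code of Eq. (1) at every interior pixel. It is our own illustration rather than the authors' code: neighbors are sampled with nearest-neighbor rounding instead of the interpolation mentioned above, and the function and variable names are our own.

```python
import numpy as np

def rotation_invariant_lbp(image, P=8, R=1):
    """Sketch of Eq. (1): rotation-invariant LBP codes for interior pixels.

    Uses nearest-neighbor rounding rather than bilinear interpolation to
    keep the example short.
    """
    h, w = image.shape
    codes = np.zeros((h, w), dtype=np.int64)
    # Neighbor offsets on a circle of radius R around the center pixel.
    angles = 2 * np.pi * np.arange(P) / P
    dx, dy = R * np.cos(angles), R * np.sin(angles)
    for y in range(R, h - R):
        for x in range(R, w - R):
            g_c = image[y, x]
            # s(g_p - g_c) for each neighbor p.
            bits = [1 if image[int(round(y + dy[p])), int(round(x + dx[p]))] >= g_c
                    else 0 for p in range(P)]
            # Rotation invariance: minimum over all circular shifts l = 0..P-1.
            codes[y, x] = min(sum(b << ((p + l) % P) for p, b in enumerate(bits))
                              for l in range(P))
    return codes
```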

Let N denote the total number of rotation-invariant LBP patterns. According to the definition in Eq. (1), N is determined by the number of neighboring samples P. In order to reduce the interference of noise, Ojala et al. [18] defined the U value at each pixel as the number of bitwise 0/1 transitions in the LBP code:

$$ \begin{aligned} U(\text{LBP}_{P,R})=&\,|s(g_{P-1}-g_{c})-s(g_{0}-g_{c})| \\ &+ \sum\limits^{P-1}_{p=1}|s(g_{p}-g_{c})-s(g_{p-1}-g_{c})| \end{aligned} $$
(2)
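As an illustration (our own sketch, not the authors' code), the U value of Eq. (2) simply counts circular 0/1 transitions in the thresholded neighbor bit string; patterns with U ≤ 2 are the uniform patterns.

```python
def uniformity(bits):
    """U value of Eq. (2): number of 0/1 transitions in the circular code.

    `bits` is the list of thresholded neighbor values s(g_p - g_c); the
    p = 0 term compares against bits[-1], which closes the circle.
    """
    return sum(bits[p] != bits[p - 1] for p in range(len(bits)))

# For example, uniformity([0, 0, 1, 1, 1, 1, 0, 0]) == 2 (a uniform pattern),
# while uniformity([0, 1, 0, 1, 0, 1, 0, 1]) == 8 (non-uniform).
```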

Although the uniform patterns are robust to noise, they do not account for a high proportion of the patterns in cloud images; therefore, they cannot capture the fundamental properties of these images. To ensure the robustness of the feature representation, the salient LBP descriptor [14] is constructed in the following steps. First, a rotation-invariant LBP histogram is built for every cloud image, and all of these histograms are accumulated into a single histogram. This histogram is then sorted in descending order; the first several patterns in the sorted histogram are the most frequently occurring patterns in the cloud images, which are defined as the salient patterns. The minimum number k of salient patterns is determined by:

$$ k=\min\left\{k~:~\frac{\sum^{k-1}_{j=0}H[j]}{\sum_{j}H[j]}\geq T\right\} $$
(3)

Here, H[0],H[1],… denotes the sorted histogram over all rotation-invariant patterns, and T is a threshold determining the proportion of salient patterns; we empirically set T=80%. The salient patterns of class i obtained by solving Eq. (3) are denoted as S[i].
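A minimal sketch of this selection step, assuming the per-image rotation-invariant LBP histograms are already available as NumPy arrays (names are illustrative):

```python
import numpy as np

def salient_patterns(histograms, T=0.8):
    """Sketch of Eq. (3): labels of the most frequent patterns.

    `histograms` holds one rotation-invariant LBP histogram per image.
    Returns the pattern labels, sorted by frequency, whose cumulative
    share of all occurrences first reaches the threshold T.
    """
    total = np.sum(histograms, axis=0)        # accumulate into one histogram
    order = np.argsort(total)[::-1]           # most frequent patterns first
    share = np.cumsum(total[order]) / total.sum()
    k = int(np.searchsorted(share, T)) + 1    # smallest k with share >= T
    return order[:k]
```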

3 The proposed learning group patterns

In order to capture hierarchical spatial pyramid information of cloud images in WSN, the proposed learning group pattern descriptors take texture resolution variations into account by cascading SLBP information. Specifically, we learn the SLBP for each resolution and then put all the patterns together to form the final representation. The pyramid transform is an effective multi-resolution analysis approach; in this paper, we represent salient local binary patterns in a spatial pyramid domain.

During the pyramid transform, each pixel in a lower spatial pyramid level is obtained by down-sampling its adjacent higher-resolution image, as shown in Fig. 2. Thus, a pixel in a low-resolution image corresponds to a region in its high-resolution counterpart. Sequential pyramid images are constructed as shown in Fig. 2; each pair of neighboring images differs in resolution by a factor of 4, i.e., the down-sampling ratios in the x and y directions are both 2.

Fig. 2 Diagram of pyramid sampling across three neighboring resolutions

Let f(x,y) denote the original image. The base level of the pyramid is the original image itself:

$$ Q_{1}(x,y) = f(x,y) \quad \text{for level}~l=1 $$
(4)

The pyramids of two adjacent resolutions are then related as follows:

$$ Q_{l}(x,y) = \sum\limits_{i}\sum\limits_{j} Q_{l-1}(R_{x}x+i,\,R_{y}y+j) $$
(5)

where R_x and R_y are the down-sampling ratios in the x and y directions, respectively, and i and j range over the R_x×R_y block of level l−1 that maps to pixel (x,y) of level l. R_x R_y>1 means down-sampling is applied during pyramid image generation, while R_x=R_y=1 means no down-sampling is applied.
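Under the assumptions above (down-sampling ratio 2 in each direction), the pyramid of Eqs. (4) and (5) can be sketched as follows. We average rather than sum each block so that every level keeps the original gray-value range; that choice, like the function name, is our own illustration.

```python
import numpy as np

def build_pyramid(image, levels=3, Rx=2, Ry=2):
    """Sketch of Eqs. (4)-(5): level 1 is the original image, and each
    pixel of level l aggregates an Rx-by-Ry block of level l-1."""
    pyramid = [image.astype(np.float64)]            # Q_1 = f
    for _ in range(1, levels):
        prev = pyramid[-1]
        h, w = prev.shape[0] // Ry, prev.shape[1] // Rx
        # Reshape so that each output pixel sees its Rx-by-Ry source block.
        blocks = prev[:h * Ry, :w * Rx].reshape(h, Ry, w, Rx)
        pyramid.append(blocks.mean(axis=(1, 3)))    # average over each block
    return pyramid
```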

Let S^k represent the texture information of the kth pyramid level (k=1,…,N) and g_c^k denote the corresponding center pixel of the kth level. The pyramid texture descriptor S is the combination of the textures of all resolution levels, expressed as follows:

$$ \begin{aligned} S=(S^{1}, ~S^{2}, \cdots, S^{N})=t&\left({g_{c}^{1}}, ~{g_{1}^{0}}, {g_{1}^{1}}, \cdots, g_{1}^{P-1}; ~\cdots;~ {g_{c}^{N}}, \right.\\ &\quad\left.{g_{N}^{0}}, {g_{N}^{1}}, \cdots, g_{N}^{P-1}\right) \end{aligned} $$
(6)
$$ S^{k}=t\left({g_{c}^{k}}, ~{g_{k}^{0}}, {g_{k}^{1}}, \cdots, g_{k}^{P-1}\right) $$
(7)

S^k can be obtained as described in Section 2, following Liu et al. [14]. Finally, the learning group pattern (LGP) descriptor is the concatenation of the SLBP histograms of the N spatial pyramid levels:

$$ \begin{aligned} \text{LGP}_{P,R} &= \bigcup\limits_{k}\text{SLBP}_{P,R,k} \\ &= (\text{SLBP}_{P,R,1};\text{SLBP}_{P,R,2};\cdots;\text{SLBP}_{P,R,N}) \end{aligned} $$
(8)

Through learning group patterns, we can obtain more useful information for cloud representation in WSN.
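Putting the pieces together, here is a sketch of the full LGP pipeline of Eq. (8), reusing the illustrative helpers from the previous sections; the `salient` argument stands for the per-level salient-pattern labels learned from training data, and the per-level normalization is our own assumption.

```python
import numpy as np

def lgp_descriptor(image, P=8, R=1, levels=3, salient=None):
    """Sketch of Eq. (8): concatenate per-level SLBP histograms.

    `salient`, if given, maps each pyramid level k to the pattern labels
    selected by salient_patterns() on the training set.
    """
    feats = []
    for k, level in enumerate(build_pyramid(image, levels)):
        codes = rotation_invariant_lbp(level, P, R)
        hist = np.bincount(codes.ravel(), minlength=2 ** P).astype(np.float64)
        if salient is not None:
            hist = hist[salient[k]]                # keep only salient patterns
        feats.append(hist / max(hist.sum(), 1.0))  # per-level normalization
    return np.concatenate(feats)
```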

4 Experimental results and analysis

In this section, the proposed LGP is compared with representative algorithms: LBP [18], local ternary patterns (LTP) [19], DLBP [21], and SLBP [14]. To evaluate the effectiveness of our algorithm in WSN, a series of experiments is carried out. First, we introduce two ground-based cloud databases captured in WSN: the Kiel database and the IapCAS-E database. Second, the experimental setup is described. Third, the experimental results on the two databases are provided.

4.1 Database

The Kiel database is provided by Kiel University in Germany. The key equipment for capturing the ground-based images is a camera equipped with a fisheye lens that provides a field of view larger than 180°. The camera captures one cloud image every 15 s; more information about the camera can be found in [25]. In our algorithm, phenomenological classes are used to separate the sky conditions according to the international cloud classification system published by the World Meteorological Organization (WMO), and the database is divided into seven classes. The number of samples per class differs, and the total number is 1500. This database has large illumination and intra-class variations. Samples from each class are shown in Fig. 3.

Fig. 3 Cloud samples from the Kiel database: 1. cumulus; 2. cirrus and cirrostratus; 3. cirrocumulus and altocumulus; 4. clear sky; 5. stratocumulus; 6. stratus and altostratus; and 7. cumulonimbus and nimbostratus

The other database, IapCAS-E, is provided by the Institute of Atmospheric Physics, Chinese Academy of Sciences. The cloud images in the IapCAS-E database are more challenging due to the influence of aerosol and noise. The division rules of the IapCAS-E database are consistent with those of the Kiel database. The number of samples per class is also different, and the total number is 2000. Figure 4 shows samples from each class.

Fig. 4 Cloud samples from the IapCAS-E database: 1. cumulus; 2. cirrus and cirrostratus; 3. cirrocumulus and altocumulus; 4. clear sky; 5. stratocumulus; 6. stratus and altostratus; and 7. cumulonimbus and nimbostratus

4.2 The experimental setup

For a fair comparison, we use the same experimental setup for all experiments. Each ground-based cloud image is converted to gray scale and then normalized to an average intensity of 128 with a standard deviation of 20. The chi-square distance and the nearest neighbor classifier are used. The chi-square distance is defined as follows:

$$ D(T,S) = \sum\limits_{i=1}^{M}\frac{(T_{i}-S_{i})^{2}}{T_{i}+S_{i}} $$
(9)

where T and S are the histogram features of two cloud images, M is the number of bins (i.e., the dimension of the feature), and T_i and S_i are the values of the histograms T and S at the ith bin, respectively. Quantitative evaluation of all the above algorithms is performed as:

$$ \text{Acc} = \frac{N_{C}}{N}\times 100\,\% $$
(10)

where N_C is the number of correctly classified cloud images over all seven classes and N is the total number of ground-based cloud images. Note that all the experimental results are computed based on Eq. (10). In each experiment, one fifth of the samples are randomly chosen from each class as training data while the remaining images are used for testing, and the process is repeated 100 times. The average accuracy over these 100 random splits is reported as the final result for reliability.
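A sketch of this evaluation step, combining the chi-square 1-NN classification of Eq. (9) with the accuracy of Eq. (10); the small `eps` guarding against empty bins is our addition.

```python
import numpy as np

def chi_square(T, S, eps=1e-10):
    """Chi-square distance of Eq. (9) between two histogram features."""
    return np.sum((T - S) ** 2 / (T + S + eps))

def evaluate(train_X, train_y, test_X, test_y):
    """Accuracy of Eq. (10) using a nearest neighbor classifier."""
    correct = 0
    for feat, label in zip(test_X, test_y):
        # Assign the label of the chi-square nearest training sample.
        dists = [chi_square(feat, f) for f in train_X]
        if train_y[int(np.argmin(dists))] == label:
            correct += 1
    return 100.0 * correct / len(test_y)
```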

4.3 Results analysis

We first evaluate the effectiveness of the proposed method on the Kiel database. Table 1 lists the average accuracy of our method and the comparison methods. The following four points can be drawn from the experimental results. First, the proposed method achieves the highest classification accuracy. Second, the performance of our method is over 12% higher than that of LBP and 8% higher than that of DLBP, because the proposed method takes advantage of the most frequently occurring patterns to capture descriptive information of cloud images. Third, the proposed method is over 5% more accurate than the SLBP approach, which indicates that adding texture resolution variation information helps improve classification performance. In addition, the confusion matrix shown in Fig. 5 provides a detailed summary of the performance of the proposed LGP algorithm.

Fig. 5 Confusion matrix of our method on the Kiel database

Table 1 Average classification accuracy for the five algorithms on the Kiel database

The second experiment is conducted on the IapCAS-E database. This database is more challenging because it exhibits large intra-class variations; the experimental setup is the same as for the Kiel database. Figure 4 shows samples from the different classes. The experimental results in Table 2 and Fig. 6 demonstrate that our method achieves the best results on this challenging database. We draw conclusions similar to those on the Kiel database, which again proves the effectiveness of the proposed method.

Fig. 6 Confusion matrix of our method on the IapCAS-E database

Table 2 Average classification accuracy for the five algorithms on the IapCAS-E database

5 Conclusions

In this paper, a novel feature extraction algorithm based on learning group patterns in WSN is proposed for ground-based cloud classification. The proposed descriptors take texture resolution variations into account by cascading the SLBP information of hierarchical spatial pyramids. Through learning group patterns in WSN, we obtain more useful information for cloud representation. Compared to the conventional LBP and SLBP descriptors, the pyramid representation of local binary patterns shows its effectiveness. The experimental results show that our method achieves better results than previous methods on ground-based cloud classification in WSN.

References

1. A Taravat, FF Del, C Cornaro, S Vergari, Neural networks and support vector machine algorithms for automatic cloud classification of whole-sky ground-based images. IEEE Geosci. Remote Sensing Lett. 12(3), 666–670 (2014).

2. I Yanovsky, AB Davis, Separation of a cirrus layer and broken cumulus clouds in multispectral images. IEEE Trans. Geosci. Remote Sensing 53(5), 2275–2285 (2015).

3. S Liu, C Wang, B Xiao, Z Zhang, X Cao, Tensor ensemble of ground-based cloud sequences: its modeling, classification and synthesis. IEEE Geosci. Remote Sensing Lett. 10(5), 1190–1194 (2013).

4. A Kazantzidis, P Tzoumanikas, AF Bais, S Fotopoulos, G Economou, Cloud detection and classification with the use of whole-sky ground-based images. Atmos. Res. 113, 80–88 (2012).

5. J Yang, WT Lv, Y Ma, W Yao, QY Li, An automatic ground based cloud detection method based on adaptive threshold. J. Appl. Meteorological Sci. 20(6), 713–721 (2013).

6. U Feister, H Möller, T Sattler, J Shields, U Görsdorf, J Güldner, Comparison of macroscopic cloud data from ground-based measurements using vis/nir and ir instruments at Lindenberg, Germany. Atmos. Res. 92(2), 395–407 (2010).

7. N Lamei, KN Hutchison, MM Crawford, N Khazenie, Cloud-type discrimination via multispectral textural analysis. Optical Eng. 33(4), 1303–1313 (1994).

8. JE Shields, ME Karr, TP Tooman, The whole sky imager—a year of progress, in Proceedings of the Eighth Atmospheric Radiation Measurement Science Team Meeting (1998), pp. 11–16.

9. A Heinle, A Macke, A Srivastav, Automatic cloud classification of whole sky images. Atmos. Measurement Tech. 3(1), 557–567 (2010).

10. CN Long, JM Sabburg, J Calbó, D Pagès, Retrieving cloud characteristics from ground-based daytime color all-sky images. J. Atmos. Oceanic Technol. 23(5), 633–652 (2006).

11. JA Shaw, B Thurairajah, Short-term arctic cloud statistics at NSA from the infrared cloud imager, in Proceedings of the Thirteenth ARM Science Team Meeting (2003).

12. Q Liang, X Cheng, SC Huang, D Chen, Opportunistic sensing in wireless sensor networks: theory and application. IEEE Trans. Comput. 63(8), 2002–2010 (2014).

13. Q Liang, Radar sensor wireless channel modeling in foliage environment: UWB versus narrowband. IEEE Sensors J. 11(6), 1448–1457 (2011).

14. S Liu, C Wang, B Xiao, Z Zhang, Y Shao, Salient local binary pattern for ground-based cloud classification. Acta Meteorologica Sinica 27(2), 211–220 (2013).

15. M Singh, M Glennen, Automated ground-based cloud recognition. Pattern Anal. Applic., 258–271 (2005).

16. J Calbó, J Sabburg, Feature extraction from whole-sky ground-based images for cloud-type recognition. J. Atmos. Ocean. Tech., 3–14 (2008).

17. W Zhuo, Z Cao, Y Xiao, Cloud classification of ground-based images using texture–structure features. J. Atmos. Ocean. Tech. 31(1), 79–92 (2014).

18. T Ojala, M Pietikäinen, T Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002).

19. X Tan, B Triggs, Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process. 19(6), 1635–1650 (2010).

20. L Zheng, S Wang, Q Tian, Coupled binary embedding for large-scale image retrieval. IEEE Trans. Image Process. 23(8), 3368–3380 (2014).

21. Z Guo, L Zhang, D Zhang, A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 19(6), 1657–1663 (2010).

22. Z Zhang, C Wang, B Xiao, W Zhou, S Liu, Cross-view action recognition using contextual maximum margin clustering. IEEE Trans. Circuits Syst. Video Technol. 24(10), 1663–1668 (2014).

23. L Zheng, S Wang, Z Liu, Q Tian, Fast image retrieval: query pruning and early termination. IEEE Trans. Multimedia 17(5), 648–659 (2015).

24. Z Zhang, C Wang, B Xiao, W Zhou, S Liu, Attribute regularization based human action recognition. IEEE Trans. Inform. Forensics Secur. 8(10), 1600–1609 (2013).

25. J Kalisch, A Macke, Estimation of the total cloud cover with high temporal resolution and parametrization of short-term fluctuations of sea surface insolation. Meteorologische Zeitschrift 17(5), 603–611 (2008).


Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grants No. 61401309 and No. 61501327, the Natural Science Foundation of Tianjin under Grant No. 15JCQNJC01700, and the Doctoral Fund of Tianjin Normal University under Grants No. 5RL134 and No. 52XB1405.

Author information


Corresponding author

Correspondence to Zhong Zhang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Liu, S., Zhang, Z. Learning group patterns for ground-based cloud classification in wireless sensor networks. J Wireless Com Network 2016, 69 (2016). https://doi.org/10.1186/s13638-016-0564-x
