Experimental analysis and evaluation of wide residual networks based agricultural disease identification in smart agriculture system

Abstract

Specialised pest and disease control in the agricultural crops industry has long been a high-priority issue. Owing to their cost-effectiveness and efficient automation, computer vision (CV)-based automatic pest and disease identification techniques are widely used in smart agricultural systems. With the rapid development of artificial intelligence, an increasing number of scholars in CV-based agricultural pest identification have begun to move their attention from traditional machine learning models to deep learning techniques. However, deep learning techniques still suffer from problems such as limited data samples, the cost-effectiveness of network structures, and high image quality requirements, which greatly limit their potential utilisation in smart agricultural systems. This paper investigates the utilisation of a recent deep-learning model, wide residual networks (WRN), for the CV-based automatic disease identification problem. We first built a large-scale agricultural disease image dataset containing over 36,000 disease images, covering typical diseases of tomato, potato, grape, corn and apple. We then analysed and evaluated the wide residual networks algorithm on a Tesla K80 graphics processing unit (GPU) under the TensorFlow deep-learning framework. A set of comprehensive experimental protocols was designed to compare WRN with GoogLeNet Inception V4 on several benchmarks. The experimental results indicate that (1) under the WRN architecture, the Softmax loss function gives faster convergence and improved accuracy compared with the GoogLeNet Inception V4 network; (2) while WRN is effective for agricultural disease identification, its effectiveness strongly depends on the number of training samples, around 36 k images in our experiment; and (3) recognition performance is better when a single disease class contains more than 800 images. The disease identification results show that the WRN model can be applied to the identification of agricultural diseases.

Introduction

Specialised pest and disease prevention for the crops industry has been a high-priority agricultural issue in many countries. Agricultural pests and diseases cause great harm to agricultural production. Pest control has always been an important link in agricultural production, and the effective identification and monitoring of agricultural pests and diseases is the basis for their prevention and control. Owing to their cost-effectiveness and efficient automation, computer vision (CV)-based automatic pest and disease identification techniques [1,2,3,4] are widely used in smart agricultural systems. CV-based real-time collection of pest and disease images combined with remote intelligent diagnosis is therefore an important research direction.

Disease identification is a typical object detection and recognition problem. Typical object detection and recognition methods include the following steps: feature extraction, object recognition and object positioning. Early work on insect and disease identification includes Zayas and Flinn's RGB multispectral analysis [5] and the principal component analysis (PCA)-based method proposed by Weeks et al. [6]. Traditional object detection and classification approaches such as OpenCV [7], SVM [8], KNN [9] and other machine learning models [10, 11] have also been widely used.

In recent years, the rapid development of deep learning techniques [12,13,14,15] has brought progress to the field of agricultural pest and disease image recognition. In CV-based object detection research, an increasing number of scholars have begun to move their attention from traditional machine learning models to deep learning techniques, and the study of convolutional neural networks (CNN) [12, 13] in particular has become mainstream. Many CNN-based algorithms have emerged that significantly improve system performance for both classification and detection. For instance, a multi-layer convolutional neural network was designed [14] to identify pathological images of the body, and Zhang et al. [15] built an 8-layer convolutional neural network model for training and testing on a self-expanding leaf library. Deep CNN architectures can also achieve automatic feature extraction. Research based on wide residual network [16], AlexNet [17], GoogLeNet [14] and ResNet [15] convolutional neural network structures has likewise been carried out.

Among the above CNN architectures, GoogLeNet is widely recognised as a state-of-the-art CNN-based architecture for classification and detection on large-scale image datasets. In our paper, the identification of diseases from mobile images is a typical object detection and classification problem, for which GoogLeNet is a suitable candidate for investigation. However, deep learning techniques still suffer from problems such as limited data samples, the cost-effectiveness of network structures, and high image quality requirements, which greatly limit their potential utilisation in smart agricultural systems. One typical problem with GoogLeNet is how to speed up a traditional deep-learning technique while retaining satisfactory accuracy. The wide residual network (WRN) [16] algorithm widens the ResNet blocks while reducing their depth, and has achieved improved classification performance with significantly fewer network layers. Moreover, WRN overcomes the problem of diminishing feature reuse to a certain extent and is several times faster to train than very deep ResNets. This paper therefore investigates the utilisation of the WRN (wide residual networks) deep-learning model for the CV-based automatic disease identification problem.

We first built a large-scale agricultural disease image dataset containing over 36,000 disease images, covering typical diseases of tomato, potato, grape, corn and apple. We then analysed and evaluated the wide residual networks algorithm on a Tesla K80 graphics processing unit (GPU) under the TensorFlow deep-learning framework. A set of comprehensive experimental protocols was designed to compare WRN with GoogLeNet Inception V4 on several benchmarks. The experimental results indicate that (1) under the WRN architecture, the Softmax loss function gives faster convergence and improved accuracy compared with the GoogLeNet Inception V4 network; (2) while WRN is effective for agricultural disease identification, its effectiveness strongly depends on the number of training samples, around 36 k images in our experiment; and (3) recognition performance is better when a single disease class contains more than 800 images. The disease identification results show that the WRN model can be applied to the identification of agricultural diseases.

This paper takes the diseases of tomato, grape, potato, corn and other major crops as the research objects, and uses 36,000 pictures of common agricultural diseases, collected and annotated in the field, as the dataset. Based on TensorFlow, we introduce the wide residual networks convolutional neural network architecture, train it on the disease data, and establish an agricultural disease identification model based on the WRN convolutional neural network, providing a reference method for new applications of deep learning in the field of agricultural pests and diseases.

The rest of this paper is organized as follows: Section II presents the materials and methods. Section III describes the proposed experimental research approach with the details of the key protocols developed. Section IV reports the experimental results on the benchmark. Finally, the conclusion is drawn in Section V.

Materials and methods

Multiple disease image dataset

For agricultural insect identification, a few open databases have been released, such as the Butterfly Dataset [18]. However, to the best of our knowledge, few suitable datasets covering multiple disease image data have been released, while our purpose is to detect different kinds of insects simultaneously in one image. This is mainly because the acquisition of high-quality digital images of pests and diseases in natural scenes is very difficult due to issues such as the diversity and distribution of insects, lighting and illumination, and the limits of weather and environment. As a result, we attempt to establish an open-source disease and pest database, as shown in Fig. 1. The agricultural disease image data used in this paper come from the "National Agricultural and Rural Big Data Center Yunnan Sub-center". The data were collected from the field to the laboratory, with centralized shooting and post-processing of 24-bit-depth pictures performed by a Canon EOS 80D. The data were then labeled and screened by professionals to avoid errors and duplication.

Fig. 1
Disease image samples acquired by Canon EOS 80D

The data source includes 10 varieties of crops such as grape, potato, tomato and corn, involving Apple Scab, Apple Frogeye Spot, Cedar Apple Rust, Cherry Powdery Mildew, Cercospora zeae-maydis Tehon and Daniels, Puccinia polysora, Corn Curvularia Leaf Spot Fungus, Grape Black Rot Fungus, Grape Leaf Blight Fungus, Potato Early Blight, Strawberry Leaf Blight, Tomato Septoria Leaf Spot Fungus, Tomato Spider Mite Damage, etc., amounting to more than 60 different levels of pest and disease data. The number of pictures reaches more than 36,000. Some sample images from the dataset are shown in Fig. 1.

The crop types, disease names and quantities in the multiple disease dataset are shown in Table 1.

Table 1 Detailed information of sample dataset

Convolutional neural network methods

Wide residual networks

Wide residual networks were proposed by Zagoruyko and Komodakis [16] to explore a much richer set of network architectures built from ResNet [15] blocks and to thoroughly examine how several other aspects affect performance.

Traditional CNN architectures [19, 20] have long prompted discussion of shallow versus deep residual networks. Deep residual networks have been shown to scale up to thousands of layers while still improving performance. However, each fraction of a percent of improved accuracy costs nearly a doubling of the number of layers, and training very deep residual networks suffers from diminishing feature reuse, which makes these networks very slow to train. In contrast, WRN is a novel architecture that decreases the depth and increases the width of residual networks. The WRN used in this paper for evaluation is based on Zagoruyko and Komodakis's research [16] on the depth of the ResNet residual network structure. The structure of the WRN is shown in Table 2.

Table 2 Structure diagram of WRN
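The width-versus-depth tradeoff described above can be illustrated by counting the weights in a single "basic-wide" block. The snippet below is a minimal sketch, not the authors' implementation, and the channel counts are illustrative assumptions:

```python
# Sketch: weight count of one "basic-wide" residual block (two 3x3
# convolutions), where the widening factor k multiplies the channel count.
# Channel sizes here are illustrative, not the paper's exact configuration.

def basic_wide_block_params(in_channels: int, base_width: int, k: int) -> int:
    """Convolution weights in a two-layer 3x3 residual block widened by k."""
    width = base_width * k
    conv1 = 3 * 3 * in_channels * width   # first 3x3 convolution
    conv2 = 3 * 3 * width * width         # second 3x3 convolution
    return conv1 + conv2

plain = basic_wide_block_params(16, 16, 1)   # un-widened block: 4608 weights
wide = basic_wide_block_params(16, 16, 4)    # k = 4 widened block: 46080 weights
print(plain, wide)
```

A single widened block here carries roughly ten times the weights of its un-widened counterpart, which is why WRN can match very deep ResNets with far fewer layers.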

According to the structure of WRN shown in Fig. 2, the WRN designed in this paper uses the basic-wide block as the basic residual unit. The size of the base convolution kernel is 3 × 3, and there are 4 groups of residual units, each containing 5 residual convolution modules; the pooling layer uses average pooling. The static structure of WRN's TensorBoard graph is shown in the Appendix.

Fig. 2

Static structure of WRN

GoogleNet Inception V4

The Inception network [14] was an important milestone in the development of CNN classifiers. Before Inception, most popular CNNs simply stacked convolution layers deeper and deeper in the hope of better performance.

GoogLeNet Inception [14] has gone through four major versions. The V1 version focused on branch networks of 1 × 1, 3 × 3 and 5 × 5 convolutions together with 3 × 3 pooling; in the V2 version, batch normalization was introduced in place of dropout and LRN; the V3 version introduced factorization, in which a larger two-dimensional convolution is split into two smaller one-dimensional convolutions, for example the 3 × 3 convolution is split into a 1 × 3 convolution followed by a 3 × 1 convolution. For one thing, this saves a large number of parameters and accelerates the computation; for another, an additional non-linearity is introduced.
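The parameter saving from this factorization can be checked by counting weights. The sketch below assumes equal input and output channel counts C, an illustrative simplification rather than the exact Inception layer sizes:

```python
# Sketch: weights of a full 3x3 convolution vs. the factored 1x3 + 3x1
# pair used by Inception V3, for C input and C output channels.

def conv_params(kh: int, kw: int, c_in: int, c_out: int) -> int:
    return kh * kw * c_in * c_out

C = 64
full = conv_params(3, 3, C, C)                                # 9 * C * C
factored = conv_params(1, 3, C, C) + conv_params(3, 1, C, C)  # 6 * C * C
print(full, factored)   # the factored pair needs 2/3 of the weights
```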

The GoogLeNet Inception V4 used in this article is based on GoogLeNet Inception V3. After in-depth study of the inception module, it is built with residual connections, which not only greatly accelerate training but also considerably improve performance. It is widely used in the field of image recognition.

Experimental evaluation method

Design of the experimental protocols

We designed three sets of experimental protocols for evaluating the performance of WRN convolutional neural network and GoogLeNet Inception V4. The experimental design is as follows:

  1) For the full dataset, the WRN and GoogLeNet Inception V4 models were used for training, and the training loss function curves and accuracy values were evaluated and analysed on the basis of the cross-entropy loss function.

  2) For the common disease picture data of tomato, potato and corn, the WRN convolutional neural network was used to construct tomato, potato and corn models, which were then tested, evaluated and analysed.

  3) The diseases of potato and corn were selected to simulate an intercropping environment; the WRN convolutional neural network was used for training, and the results were tested, evaluated and analysed.

Construction of convolutional neural network

Construction of basic environment

The model implementation in this paper builds the TensorFlow deep learning framework based on cuda_9.0, cudnn 6, tensorflow 1.4.1_gpu and related environments on the Ubuntu 16.04 LTS operating system. The computing platform uses a single Tesla K80 graphics processing unit (GPU), on a desktop computer powered by an Intel Core i7 4790 CPU with 64 GB of memory.

Selection and construction of network model

This paper focuses on building the WRN and GoogLeNet Inception V4 networks, which were trained, tested, compared and analysed.

Based on the study of related literature and the WRN model, the data fed to the network in each iteration were set during model construction. After an image is input, its height and width are resized to 224 × 224 pixels, following the standard ResNet input size. The minimum learning rate is set to 0.0001, the initial learning rate to 0.1 and the learning rate decay to 0.0002, and 3 × 3 convolutions are used; training uses the momentum gradient descent method, and the loss function is the Softmax loss.
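The training ingredients named above can be sketched in NumPy. This is an illustrative sketch, not the paper's code; the momentum coefficient mu = 0.9 is an assumption, as the paper does not state it.

```python
import numpy as np

# Sketch of the training ingredients above: the Softmax cross-entropy
# loss and one step of momentum gradient descent. The momentum
# coefficient mu = 0.9 is an assumption not stated in the paper.

def softmax_cross_entropy(logits: np.ndarray, label: int) -> float:
    z = logits - logits.max()                 # stabilise the exponentials
    log_probs = z - np.log(np.exp(z).sum())   # log of softmax probabilities
    return float(-log_probs[label])

def momentum_step(w, grad, velocity, lr=0.1, mu=0.9):
    velocity = mu * velocity - lr * grad      # accumulate past gradients
    return w + velocity, velocity

loss = softmax_cross_entropy(np.array([2.0, 1.0, 0.1]), label=0)
print(loss)   # about 0.417 for this example
```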

Evaluation indicators

Evaluation indices are an important basis for judging the classification effect of a model. In this paper, the accuracy rate is used as the comprehensive evaluation index, with precision and recall also reported. The evaluation indicators involved in this paper are described below:

Evaluation terminology

True positives (TP): the number of positive cases that are correctly divided into positive examples, that is, the number of instances (samples) that are actually positive examples and are classified into positive examples by the classifier;

False positives (FP): the number of instances that are incorrectly divided into positive examples, that is, the number of instances that are actually negative but are classified as positive by the classifier;

False negatives (FN): the number of instances that are incorrectly divided into negative examples, that is, the number of instances that are actually positive but are classified as negative by the classifier;

True negatives (TN): The number of instances that are correctly divided into negative examples, that is, the number of instances that are actually negative and are classified as negative by the classifier.

Evaluation indicators

Accuracy

Accuracy is the most common indicator: Accuracy = (TP + TN)/(P + N), the number of correctly classified samples divided by the total number of samples. The higher the accuracy, the better the classifier.

Precision

Precision is a measure of exactness, representing the proportion of instances classified as positive that are truly positive: precision = TP/(TP + FP).

Recall

Recall is a measure of coverage, representing the proportion of truly positive instances that are classified as positive: recall = TP/(TP + FN).
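Combining the four counts defined above, the indicators can be computed as in this short sketch (the counts in the example are made up for illustration):

```python
# Sketch: the paper's evaluation indicators computed from the four counts.

def evaluate(tp: int, fp: int, fn: int, tn: int):
    accuracy = (tp + tn) / (tp + fp + fn + tn)          # correct / all
    precision = tp / (tp + fp)                          # exactness
    recall = tp / (tp + fn)                             # coverage
    f1 = 2 * precision * recall / (precision + recall)  # used in Tables 4-7
    return accuracy, precision, recall, f1

# Hypothetical classifier: 90 TP, 10 FP, 5 FN, 95 TN
acc, prec, rec, f1 = evaluate(90, 10, 5, 95)
print(acc, prec, rec, f1)
```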

Experimental procedure

The experimental procedures mainly contain two parts: (1) construction of model and (2) testing the model.

The first part includes pre-processing of the raw agricultural pest and disease images, followed by construction of the WRN model. After that, we built the baseline image datasets and started the training process. Finally, we generated the loss function curve of the WRN and analysed the experimental results. Among these steps, data processing aims at classifying the initial pest image dataset and determining the training and testing datasets. Observation of the loss function curves determines the best performance of the models during training.
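The data-processing step above (classifying the dataset and determining the training and testing sets) can be sketched as a per-class split. The 80/20 ratio and the file names are illustrative assumptions; the paper does not state its exact split.

```python
import random

# Sketch: split labelled image paths into training and testing subsets,
# class by class. The 80/20 ratio is an assumption, not the paper's value.

def split_dataset(samples_by_class, train_ratio=0.8, seed=42):
    rng = random.Random(seed)               # fixed seed: reproducible split
    train, test = [], []
    for label, paths in samples_by_class.items():
        paths = list(paths)
        rng.shuffle(paths)
        cut = int(len(paths) * train_ratio)
        train += [(p, label) for p in paths[:cut]]
        test += [(p, label) for p in paths[cut:]]
    return train, test

demo = {"tomato_septoria": [f"img_{i}.jpg" for i in range(10)],
        "potato_early_blight": [f"img_{i}.jpg" for i in range(5)]}
train, test = split_dataset(demo)
print(len(train), len(test))   # 12 3
```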

The second part, testing the model, aims at evaluating the WRN model over several different parameter settings to analyse the best performance of the models. Through analysis of the experimental results, we can re-design the experimental protocols to further study the accuracy and robustness of the WRN models on these datasets.

Experimental results and discussion

General disease identification performance of WRN and GoogLeNet Inception V4

For the 35 diseases of 10 crop varieties, including tomato, potato, grape, corn, apple and citrus, in the 36,000-image dataset, the WRN model was trained until the loss function curve tended to converge, at which point training was ended; the model was then compared with the GoogLeNet Inception V4 model. The loss function curves during model training are shown in Fig. 3 a and b.

Fig. 3

Comparison of loss function curves under WRN and GoogLeNet Inception V4

In deep-learning network training, a proper number of iterations helps to avoid under-fitting and over-fitting, so the number of training iterations is one of the important parameters in the training process. Since the two models in this paper both use the Softmax loss, their training loss function curves are compared and discussed as follows:

(1) For the WRN model, as iterations increase, the training loss approaches convergence at about 6000 to 7000 steps, where the value of the loss function is already around 0.2, indicating that our model could quickly learn the disease characteristics at the beginning of the training phase. As the network continues to iterate, the decline in training loss becomes slower, and during iterations 8000 to 12,000 the model converges. We therefore chose 12,000 as the training iteration parameter in the experiment.

(2) When GoogLeNet Inception V4 reaches 12,000 iterations, the value of its loss function is still around 0.6, which shows that GoogLeNet Inception V4 learns the disease features more slowly than the WRN model. Both models were tested at 12,000 iterations. With the same dataset and the same test data processing, the accuracy of the WRN model is 0.9103 and the accuracy of the GoogLeNet Inception V4 model is 0.5726, showing that the WRN model introduced in this paper has a better effect on the identification of agricultural diseases.
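The stopping rule used throughout these experiments (end training when the loss curve tends to converge) can be made concrete with a simple plateau check; the window size and tolerance below are illustrative assumptions, not values from the paper.

```python
# Sketch: declare convergence when the mean loss over the most recent
# window no longer improves on the previous window by more than tol.
# Window size and tolerance are illustrative assumptions.

def has_converged(losses, window=3, tol=0.01):
    if len(losses) < 2 * window:
        return False                        # not enough history yet
    prev = sum(losses[-2 * window:-window]) / window
    curr = sum(losses[-window:]) / window
    return prev - curr < tol                # no meaningful recent decrease

falling = [2.0, 1.2, 0.8, 0.5, 0.35, 0.25]
flat = falling + [0.22, 0.215, 0.212, 0.210, 0.209, 0.208]
print(has_converged(falling), has_converged(flat))   # False True
```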

The results of testing the WRN model are shown in Table 3. Through the analysis of Table 3, the conclusions are as follows:

  (1) The WRN network, which performs well in image recognition on public datasets, also has a good effect in agricultural disease identification and can provide a new deep learning network for agricultural pest identification;

  (2) From the test results, the larger the sample count of a single disease class, the higher the evaluation result of the model, which indicates that the model is well suited to application in an agricultural big data environment.

Table 3 Experimental results on testing WRN model

Specific disease identification performance of WRN and GoogLeNet Inception V4

Using the WRN model, the crops with more than 1000 data samples and better recognition results, such as tomato, potato and corn, were selected as the main research objects in order to construct professional identification models for their important diseases.

Using the constructed WRN model, the training sample data of the important diseases of tomato, potato and corn were input respectively. When the loss function curve became stable, model training was ended. The loss function curves in the model training process are shown in Fig. 4 a (tomato), b (potato) and c (corn). After model training finished, the test dataset reserved in advance was input into the model and the recognition results were output by the test program. The accuracy value for tomato is 96%, for potato 98%, and for corn 94%. In addition, the comprehensive evaluation of precision, recall and F1-score is shown in Tables 4, 5 and 6.

Fig. 4

Loss function curves for tomato, potato and corn

Table 4 Recognition results of tomato
Table 5 Recognition results of potato
Table 6 Recognition results of corn

Disease identification performance under simulated environment on intercropping corn and potato

In agricultural production in Yunnan, intercropping is widely practised, and we use it to verify the effect of the WRN model in agricultural production. In this paper, the important diseases of common crops such as corn and potato are used as identification objects, and an intercropping environment is simulated. The recognition effect of the model is shown below.

Using the WRN model, four common diseases of corn and two common diseases of potato were taken as data samples. As shown in Fig. 5, the model was trained until the loss function curve tended to be stable, at which point training was ended; Fig. 5 also shows the loss function curve during training. On this basis, the recognition program was used to input the test set data to the model for identification, and the identified accuracy value is 96%. The comprehensive evaluation of precision, recall and F1-score is shown in Table 7. The analysis shows an identified accuracy of about 97%, indicating that the agricultural disease identification model constructed with the WRN network has a good effect on disease identification in the simulated corn and potato intercropping scenario. It has important reference value for agricultural disease identification in Yunnan Province.

Fig. 5

Loss function curves during construction of the simulation model of intercropping corn and potato

Table 7 Recognition results of intercropping corn and potato

By analysing the accuracy values and the comprehensive evaluation of precision, recall and F1-score, the conclusions are as follows:

(1) Through the training loss analysis of the WRN model and GoogLeNet Inception V4, it was found that as iterations increased, the training loss of the WRN model approached convergence at about 6000 to 7000 steps, while at 12,000 iterations the value of the GoogLeNet Inception V4 loss function was still around 0.6. It can be seen that GoogLeNet Inception V4 learned the pest and disease features more slowly than the WRN model. Both models were tested at 12,000 iterations: the accuracy of the WRN model was 0.91 and the accuracy of the GoogLeNet Inception V4 model was 0.57, showing that the WRN model introduced in this paper has a better effect on the identification of agricultural diseases.

(2) Experiments on more than 36,000 images of 35 different diseases show that WRN has certain requirements on the number of agricultural disease image samples: classes with more than 800 samples showed better recognition results;

(3) Considering that the images were not optimized during the training process and the original images were used, the identification results on grape, corn, potato and the corn and potato intercropping scenario indicate that the WRN model can be applied to the identification of agricultural diseases.

Conclusion and future work

This paper studied WRN-based automatic identification of agricultural diseases. More than 36,000 pictures of diseases involving tomato, potato, grape, corn, apple, etc. were studied. Through literature retrieval and comparative analysis, a Tesla K80 graphics processing unit (GPU) was used for deep-learning experiments in TensorFlow. The WRN convolutional neural network and the GoogLeNet Inception V4 network were trained, tested, evaluated and compared in this framework, and agricultural disease identification based on the WRN convolutional neural network was studied.

Through the experimental design, accuracy was used to evaluate the models, and three sets of experiments were designed: the WRN and GoogLeNet Inception V4 models were trained on the full dataset, and the training loss function curves and accuracy values were evaluated and analysed; the WRN convolutional neural network was used to construct tomato, potato and corn models, which were tested, evaluated and analysed; and the diseases of potato and corn were selected to construct an intercropping environment, with the WRN convolutional neural network used for training, evaluation and analysis.

Abbreviations

CV:

Computer vision

WRN:

Wide residual networks

PCA:

Principle component analysis

CNN:

Convolutional neural networks

References

  1. W. Ding, G. Taylor, Automatic moth detection from trap images for pest management. Computers and Electronics in Agriculture 123, 17–28 (2016)
  2. S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, pp. 91–99 (2015)
  3. J. Dai, Y. Li, K. He, J. Sun, R-FCN: object detection via region-based fully convolutional networks. Advances in Neural Information Processing Systems, pp. 379–387 (2016)
  4. A. Shrivastava, A. Gupta, R. Girshick, Training region-based object detectors with online hard example mining. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 761–769 (2016)
  5. I.Y. Zayas, P.W. Flinn, Detection of insects in bulk wheat samples with machine vision. Transactions of the ASAE 40(3), 883 (1998)
  6. P.J.D. Weeks, M.A. O'Neill, K.J. Gaston, I.D. Gauld, Species-identification of wasps using principal component associative memories. Image and Vision Computing 17(12), 861–866 (1999)
  7. OpenCV 2.4: https://docs.opencv.org/3.0-beta/modules/refman.html [Accessed Dec 2018]
  8. J. Wang, C. Lin, L. Ji, A. Liang, A new automatic identification system of insect images at the order level. Knowledge-Based Systems 33, 102–110 (2012)
  9. N. Larios, B. Soran, L.G. Shapiro, G. Martinez-Munoz, J. Lin, T.G. Dietterich, Haar random forest features and SVM spatial matching kernel for stonefly species identification. 20th International Conference on Pattern Recognition (ICPR), pp. 2624–2627 (2010)
  10. C. Wen, D.E. Guyer, W. Li, Local feature-based identification and classification for orchard insects. Biosystems Engineering 104(3), 299–307 (2009)
  11. L.O. Solis-Sánchez, R. Castañeda-Miranda, J.J. García-Escalante, I. Torres-Pacheco, R.G. Guevara-González, C.L. Castañeda-Miranda, P.D. Alaniz-Lumbreras, Scale invariant feature approach for insect monitoring. Computers and Electronics in Agriculture 75(1), 92–99 (2011)
  12. Z. Liu, Z. Wu, T. Li, J. Li, C. Shen, GMM and CNN hybrid method for short utterance speaker recognition. IEEE Transactions on Industrial Informatics (2018)
  13. R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014)
  14. C. Szegedy, W. Liu, Y. Jia, et al., Going deeper with convolutions. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
  15. K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
  16. S. Zagoruyko, N. Komodakis, Wide residual networks. arXiv:1605.07146 (2017)
  17. A. Krizhevsky, I. Sutskever, G. Hinton, ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
  18. K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 (2014)
  19. K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
  20. T. Fukuda, T. Shibata, Theory and applications of neural networks for industrial control systems. IEEE Transactions on Industrial Electronics 39(6), 472–489 (1992)


Data sharing

Not applicable to this article due to restrictions of intellectual property. However, partial data may be requested for academic research purposes by contacting the authors.

Funding

The research was supported by grants from the National Natural Science Foundation of China (Grant No. 11165016) and Key Projects of the National Natural Science Foundation of China (Grant No. 11731101).

Author information

HY was responsible for the key design and implementation of the WRN model and drafted the initial manuscript. LG provided the annotated dataset, prepared the experimental protocol for evaluating the WRN model, and contributed the experimental evaluation sections of the manuscript. NT supervised the improvement and optimization of the WRN model from a mathematical perspective. PY took the main responsibility for the analysis and discussion of the experimental results and guided the production of the final version of the manuscript. All authors read and approved the final manuscript.

Correspondence to Niansheng Tang or Po Yang.

Ethics declarations

Competing interests

We declare that all authors have no significant competing financial, professional or personal interests that might have influenced the performance or presentation of the work described in this manuscript.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Fig. 6

Static structure of WRN's TensorBoard graph

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Yang, H., Gao, L., Tang, N. et al. Experimental analysis and evaluation of wide residual networks based agricultural disease identification in smart agriculture system. J Wireless Com Network 2019, 292 (2019). https://doi.org/10.1186/s13638-019-1613-z


Keywords

  • Disease identification
  • Convolutional neural network
  • Wide residual networks
  • Position-Sensitive Score Map