A Review of Various Convolutional Neural Network Model for Crop Pest Recognition and Classification

DOI : 10.17577/IJERTV10IS110144


Dr. H. B. Bhadka

Department of Computer Science,

C. U. Shah University, Wadhwan, Gujarat

Vishakha B. Sanghavi

Department of Computer Science

C. U. Shah University, Wadhwan, Gujarat

    Vijay J. Dubey

    Assistant Professor,

R. K. University, Rajkot, Gujarat

Abstract: Agricultural pests pose a severe threat to crop development and storage. Insects in particular reduce crop yield and damage produce. Insect classification is a difficult task because of the complex structure of insects and the high visual similarity between different species. It is therefore essential to identify and classify insects in crops at an early stage, so that their spread, and the crop diseases they carry, can be contained by choosing effective insecticides and organic control approaches. Traditional manual classification of insects is generally time-consuming, labor-intensive and ineffective because valuable feature sets must be selected by hand. In the past few years, machine learning approaches have been applied to insect classification and recognition to address these issues in agriculture. In this paper, we evaluate and present an overview of different machine learning methods for crop pest recognition and classification: a genetic algorithm based weighted ensemble of neural networks, deep convolutional neural networks, and a transfer learning based framework.

Keywords: Crop pest classification and recognition, genetic algorithm, deep convolutional neural network, transfer learning based framework.

      1. INTRODUCTION

Crop pests cause substantial losses to crops throughout the world, whether in developed or emerging countries [1]. According to recent investigations, nearly half of the world's crop yield is lost to pest invasions and crop diseases. Accordingly, careful pest control is an important task for enhancing crop yields and reducing losses [2], [3]. When pests infest a field, they must be recognized in time so that farmers can deliver timely treatment and prevent the pests from spreading. However, traditional pest recognition schemes have numerous limitations [4]. Most commonly used techniques rely on manual inspection, in which farmers or experts check the field daily, weekly, or monthly for pests or diseases [5].

A vast number of insect species exist in the world, and the number of individuals belonging to the same species is also huge. Hence, traditional pest recognition schemes are time-consuming, difficult and error-prone [6].

Enhancing agricultural production is therefore an active research area, and fighting pests is one of the most significant ways to improve productivity. Insects mostly affect crops such as wheat, rice, and beans [7], and they are among the main causes of loss in crop production.

Pesticides [8] and other organic control approaches should be employed to regulate insect populations and prevent their spread to large regions, which in turn prevents insect damage to crop areas [9]. Identifying the type of insect is extremely important because pest control schemes vary with the insect species [10]. However, owing to the similarities among insect species and the complex structure of insects, insect classification is a challenging task [11]. Entomologists have traditionally performed insect classification manually, which is time-consuming, difficult, and requires deep domain knowledge. To resolve these issues, experts have adopted numerous computer-aided classification approaches [12], [13].

Classical Machine Learning (ML) schemes are employed in computer-aided image classification and are regularly applied to tasks such as insect classification and identification [14]. However, classical ML approaches have several drawbacks. They need an extra, critical data pre-processing phase in the form of feature engineering [15]. In addition, their ability to generalize across different datasets is limited. Moreover, their performance strongly depends on the data at hand: a small dataset results in poor accuracy, and enlarging the dataset no longer improves performance after a certain accuracy has been reached [16]. The same problems arise in insect classification.

Recently, deep learning (DL) models, particularly Convolutional Neural Networks (CNNs), have addressed these issues when the dataset is composed of images [17]. In agriculture, DL based methods have been used regularly for tasks such as plant recognition, weed recognition and plant disease classification. DL is a special type of ML that employs multilevel neural networks and allows frameworks to learn and extract deep abstract features automatically [18]. Nowadays, various DL models have been used for pest classification and have accomplished state-of-the-art outcomes in several pest recognition applications [19].

Insect classification is a significant research field in which CNNs have performed better than classical ML approaches [20]. Prior work has verified that DL models enhance the performance of pest classification considerably. Hence, there is a need to develop intelligent expert systems that can successfully and automatically recognize crop pest images.

The survey is organized as follows: Sections 2 to 4 briefly review and summarize crop pest identification and classification schemes based on a genetic algorithm, deep convolutional neural networks, and a transfer learning based framework. Section 5 concludes the paper.

      2. CROP PEST CLASSIFICATION USING GENETIC ALGORITHM


        1. Introduction

A genetic algorithm (GA) based weighted ensemble of deep CNNs has been presented by Enes Ayan et al. [21] for crop pest classification. The proposed method combines the different generalization abilities of several CNN architectures to identify insect species. Using transfer learning and fine-tuning, seven CNN models (ResNet-50, Xception, SqueezeNet, VGG-19, MobileNet, Inception-V3 and VGG-16) were trained. Their test performances were then assessed, and the three best performing models (Xception, Inception-V3 and MobileNet) were chosen for the proposed scheme, in which the ensemble weights of the models were computed with a GA on the validation dataset.

        2. Methodology

The researchers proposed a four-stage classification method. The first stage is the adoption of seven pre-trained CNN models, which are fine-tuned using different transfer learning settings. In the second stage, the three models with the highest accuracy are selected to build the ensemble. In the third stage, the ensemble weights of the three models are established using a genetic algorithm on the validation dataset. In the last stage, the prediction is obtained by a weighted voting scheme that combines the outputs of the three models with their respective weights, as sketched below.
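To make the weighted-voting step concrete, here is a minimal NumPy sketch, assuming each model's class-probability outputs on the validation set are already available as arrays; the tiny mutation-only GA and its population settings are illustrative assumptions, not the operator set used by Ayan et al. [21].

```python
import numpy as np

def ensemble_accuracy(weights, val_probs, val_labels):
    """Accuracy of the weighted soft-voting ensemble on the validation set."""
    weights = np.asarray(weights) / np.sum(weights)            # normalise weights to sum to 1
    combined = sum(w * p for w, p in zip(weights, val_probs))  # weighted average of probabilities
    return np.mean(np.argmax(combined, axis=1) == val_labels)

def ga_search_weights(val_probs, val_labels, pop_size=30, generations=50, seed=0):
    """Very small GA: random initialisation, truncation selection, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    population = rng.random((pop_size, len(val_probs))) + 1e-3
    for _ in range(generations):
        fitness = np.array([ensemble_accuracy(ind, val_probs, val_labels)
                            for ind in population])
        parents = population[np.argsort(fitness)[-pop_size // 2:]]   # keep the best half
        children = parents + rng.normal(0.0, 0.1, parents.shape)     # mutate copies of parents
        population = np.vstack([parents, np.clip(children, 1e-3, None)])
    best = population[np.argmax([ensemble_accuracy(ind, val_probs, val_labels)
                                 for ind in population])]
    return best / best.sum()

# Toy usage: random "predictions" from three models on 200 validation images, 10 classes.
rng = np.random.default_rng(1)
val_probs = [rng.dirichlet(np.ones(10), size=200) for _ in range(3)]
val_labels = rng.integers(0, 10, size=200)
print("ensemble weights:", ga_search_weights(val_probs, val_labels))
```

In the actual method the same learned weights would then be applied to the three models' softmax outputs on the test set before taking the arg-max.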

        3. Experimental Setup

For the experiments, the researchers tried different combinations of hyper-parameters such as the number of frozen convolutional layers, the number of fully connected layers, the dropout rate, the optimization algorithm, the learning rate, and the number of epochs. After training, the best performing models and their hyper-parameter settings were saved for testing. The number of fully connected layers was kept the same in all models to allow a fair comparison of their feature-extraction performance. The implementation used the Python programming language and a deep learning framework. Three datasets were used: the publicly available D0 insect dataset, and two further datasets added to strengthen the evaluation of the ensemble models, namely a SMALL dataset [24] and the large IP102 dataset [25].

Figure 1. Experimental classification results on the test set [21]

        4. Results

Fig. 1 reports the accuracy, average precision, average recall, and average F1-score of all models on the D0 dataset. Among the fine-tuned CNN models, Inception-V3, Xception, and MobileNet were the most successful, with classification accuracies of 97.06%, 97.93%, and 97.39%, respectively, while the SMPEnsemble method reached 98.37% and the proposed GAEnsemble method 98.81% on the test set. Overall, the GAEnsemble method achieved classification accuracies of 98.81%, 95.15%, and 67.13% on the D0, SMALL, and IP102 datasets, respectively, which is higher than the accuracies reported by the other methods and prior studies on all three datasets.

      3. CROP PEST CLASSIFICATION USING DEEP CONVOLUTIONAL NEURAL NETWORK

        1. Introduction

CNN structures for crop pest identification in natural scenes have been presented by Yanfen Li et al. [22]. They use several deep CNN models to identify ten common species of crop pest. A manually collected image dataset is used and validated for the identification task. GoogLeNet is fine-tuned to cope with the difficult backgrounds produced by woodland and other natural scenes. The resulting classification performance is better than that of previous models, and the developed system outperformed other approaches.

        2. Methodology

The researchers started with dataset preparation, downloading the images from the web. After data collection, they applied data augmentation to create more data, choosing offline augmentation over online augmentation because the former works better with small datasets. The authors created 14,475 images from the 5,629 originals by applying translation, flipping, rotation, noise addition and other data augmentation methods, as sketched below.
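As an illustration of this kind of offline augmentation, a hedged Keras sketch follows; the transform parameters and the directory names `raw_pests/` and `augmented_pests/` are assumptions made for the example, not the settings reported by Li et al. [22].

```python
import os
import numpy as np
import tensorflow as tf

# Transformations roughly matching those listed above: shifts (translation),
# flips and rotation; Gaussian noise is added separately below.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=30,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)

def augment_offline(src_dir="raw_pests", dst_dir="augmented_pests", copies=2):
    """Write `copies` randomly transformed versions of every image to disk."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = tf.keras.preprocessing.image.load_img(os.path.join(src_dir, name))
        x = tf.keras.preprocessing.image.img_to_array(img)
        for i in range(copies):
            aug = augmenter.random_transform(x)                               # random geometric transform
            aug = np.clip(aug + np.random.normal(0, 5.0, aug.shape), 0, 255)  # mild additive noise
            tf.keras.preprocessing.image.array_to_img(aug).save(
                os.path.join(dst_dir, f"aug{i}_{name}"))
```

Writing the augmented copies to disk (rather than transforming on the fly during training) is what distinguishes the offline approach the authors preferred for their small dataset.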

Image processing is applied before generating the augmented dataset in order to separate the target object from the complex background, which otherwise affects the overall accuracy and the speed of training. Two preprocessing techniques are used to remove the complex background: (1) a mixed image processing technique, for images with a visible difference between target and background, in which the original image is converted into a binary image with adaptive thresholding, erosion and dilation are applied to remove noise and connect the disconnected pixels of the pest, and finally the watershed and contour detection algorithms are used to locate and outline the target; and (2) the GrabCut algorithm, for images whose foreground and background look similar, in which the pest is marked by drawing a rectangle around it, all pixels outside the rectangle are treated as background, and Gaussian Mixture Models are used to model the foreground and background. A condensed sketch of both routes is given below.
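The following condensed OpenCV sketch illustrates the two routes described above (thresholding with morphology and contours, and GrabCut); the kernel size, threshold block size, and initial rectangle are illustrative assumptions, and the watershed refinement step is omitted for brevity.

```python
import cv2
import numpy as np

def segment_by_threshold(img_bgr):
    """Route 1: for images with a visible target/background difference."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 21, 5)
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.dilate(cv2.erode(binary, kernel, iterations=1), kernel, iterations=2)
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    if contours:
        largest = max(contours, key=cv2.contourArea)     # assume the pest is the largest blob
        cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    return cv2.bitwise_and(img_bgr, img_bgr, mask=mask)

def segment_by_grabcut(img_bgr, rect):
    """Route 2: for images whose foreground and background look similar.
    `rect` = (x, y, w, h) roughly enclosing the pest; pixels outside it start as background."""
    mask = np.zeros(img_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # GMM parameters maintained by GrabCut
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    return cv2.bitwise_and(img_bgr, img_bgr, mask=fg)
```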

        3. Experimental Setup

In this study, five CNN models were investigated: VGGNet (VGG-16 and VGG-19), ResNet (ResNet50 and ResNet152) and GoogLeNet (Inception-V3). These networks have achieved considerably good performance in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [26].

The following parts give a brief introduction to the five CNN models.

          • VGGNet

This network is built from five blocks of convolution layers followed by three fully-connected layers. VGGNet uses a small pooling size and a wider feature map, due to which the architecture becomes deeper and wider while keeping the computational time low. In this work, VGG-16 and VGG-19 were used, which contain 16 and 19 weight layers, respectively.

• ResNet50 and ResNet152

This architecture was introduced to solve the degradation and vanishing gradient problems by means of residual (skip) connections. Its network structure is up to eight times deeper than VGGNet, yet simpler. The researchers used ResNet50 and ResNet152, which consist of five blocks of convolution layers with an input size of 224 × 224. Each residual unit contains three convolution layers, so the ResNet152 model has 102 more convolution layers than the ResNet50 model.

          • GoogLeNet

GoogLeNet is a deep learning architecture proposed in 2014. In this paper, Inception-V3 is used as the implementation of the GoogLeNet model; it relies on convolution factorization, in which the 7 × 7 convolution is decomposed into two one-dimensional convolutions (1 × 7 and 7 × 1) and the 3 × 3 convolution into (1 × 3 and 3 × 1), increasing the network depth while reducing the number of parameters. Inception-V3 consists of 5 convolution layers, 3 inception modules in block 1, 5 inception modules in block 2, and 2 inception modules in block 3 [27]. The default input size is 299 × 299. The parameter saving from this factorization is illustrated in the sketch below.
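The saving is easy to verify with a few lines of Keras; the channel count (64) and layer names in this sketch are illustrative assumptions, not the actual Inception-V3 configuration.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(299, 299, 64))

# Standard 7x7 convolution versus the factorized 1x7 followed by 7x1.
full = tf.keras.layers.Conv2D(64, (7, 7), padding="same", name="conv7x7")(inputs)
fact = tf.keras.layers.Conv2D(64, (1, 7), padding="same", name="conv1x7")(inputs)
fact = tf.keras.layers.Conv2D(64, (7, 1), padding="same", name="conv7x1")(fact)

m_full = tf.keras.Model(inputs, full)
m_fact = tf.keras.Model(inputs, fact)
print("7x7 parameters:      ", m_full.count_params())   # 64*64*7*7 + 64   = 200,768
print("1x7 + 7x1 parameters:", m_fact.count_params())   # 2*(64*64*7 + 64) = 57,472
```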

The feature extraction results show that classification accuracy is 5.9% higher on images with a simple background than on those with a complex background. After training, the fine-tuned GoogLeNet model correctly identifies the ten species with an average accuracy of 98.91%.

Table 1 shows that the error rate of class 3 (leafhopper) is the highest at 2.84%, because the model misclassifies it as locust, oriental fruit fly, and snail, which have a similar shape and color to the background environment. Furthermore, the model achieves 100% classification accuracy on three species of pest (Cydia pomonella, Gryllotalpa, Pieris rapae Linnaeus), while the remaining six species have accuracies between 98.26% and 99.33%.

Table 1. Confusion matrix for crop pest classification over 10 classes [22]

| Class | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Accuracy (%) |
|-------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|--------------|
| 1 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| 2 | 0 | 110 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| 3 | 0 | 0 | 101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 97.16 |
| 4 | 0 | 0 | 0 | 113 | 0 | 0 | 0 | 0 | 0 | 0 | 98.26 |
| 5 | 0 | 0 | 0 | 0 | 117 | 0 | 0 | 0 | 0 | 0 | 98.34 |
| 6 | 0 | 0 | 0 | 0 | 0 | 136 | 0 | 0 | 0 | 0 | 100 |
| 7 | 0 | 0 | 0 | 0 | 0 | 0 | 118 | 0 | 0 | 0 | 99.33 |
| 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 101 | 0 | 0 | 99.02 |
| 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 126 | 0 | 98.43 |
| 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 132 | 98.51 |
| Average accuracy | | | | | | | | | | | 98.91 |

        4. Results

In this research, the experimental results showed that the GoogLeNet model achieved a classification accuracy 6.22% higher than the ResNet101 model. This indicates that the GoogLeNet model is effective and robust for the identification of crop pests and can significantly reduce processing time when integrated into practical applications.

      4. CROP PEST CLASSIFICATION USING TRANSFER LEARNING BASED FRAMEWORK

        1. Introduction

A transfer learning based framework has been proposed by Gayatri Pattnaik et al. [23] for pest identification in tomato plants. The method uses pre-trained deep CNN models with transfer learning. The dataset, gathered from online sources, contains a total of 859 images classified into 10 groups of tomato pests, and the performance of 15 pre-trained deep CNN models was evaluated for classifying these pests.

        2. Methodology

The dataset used in this study was downloaded from various online platforms such as Flickr 2018 [28], Insect Images 2018 [29], IPM Images 2018 [30], etc. It consists of 859 RGB color tomato pest images belonging to 10 classes.

Deep learning models such as CNNs have been successful at classifying images for crop pest recognition, but they require a large dataset and high computational power. In the agricultural domain, collecting a large dataset is generally difficult because of local weather conditions, the difficulty of discriminating insects, and the uncontrolled environments in which persistent pests occur. The researchers therefore used transfer learning, in which knowledge gained from training on a large dataset is reused for the classification task. They employed 15 pre-trained deep CNN models, all trained on the ImageNet dataset [31], which has 1.2 million images belonging to 1,000 categories. Each model has its own characteristics; Table 2 lists the details of all 15 pre-trained CNN models.

Table 2. Details of the 15 pre-trained models [23]

        3. Experimental Setup

In this study, the researchers resized the tomato pest images to the shape required by each model; for example, images were resized to 224 × 224 × 3 for the VGG16 model and to 299 × 299 × 3 for the Inception model. They then replaced the last fully connected layer of 1,000 neurons with a fully connected layer of 10 neurons, since the tomato pest dataset has only 10 classes, and froze all layers except this last one to implement transfer learning, which significantly reduced the number of trainable parameters. The tomato pest dataset was divided into a 70% training set, 20% validation set and 10% test set. Each model was trained for 100 epochs with a batch size of 8 and a learning rate of 0.01, over five trials. All experiments were performed in Python with the Keras framework on a TensorFlow backend. A minimal sketch of this setup is given below.
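The following is a minimal Keras sketch of the described setup for the VGG16 case, assuming the reported hyper-parameters (batch size 8, learning rate 0.01, 100 epochs): the 1,000-way output layer is swapped for a 10-way one and everything else is frozen. The dataset directory paths are placeholders, not the authors' actual data layout.

```python
import tensorflow as tf

# Pre-trained VGG16 with its original classifier; replace only the 1000-way output layer.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=True)
features = base.layers[-2].output                       # output of the last 4096-unit FC layer
outputs = tf.keras.layers.Dense(10, activation="softmax", name="tomato_pests")(features)
model = tf.keras.Model(base.input, outputs)

for layer in model.layers[:-1]:                         # freeze everything except the new layer
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Placeholder data pipeline: images resized to 224x224 as required by VGG16.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "tomato_pests/train", image_size=(224, 224), batch_size=8, label_mode="categorical")
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "tomato_pests/val", image_size=(224, 224), batch_size=8, label_mode="categorical")

model.fit(train_ds, validation_data=val_ds, epochs=100)
```

For Inception-V3 the same pattern applies with a 299 × 299 input size.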

        4. Results

The authors calculated class-wise accuracy, precision, sensitivity, specificity, and F1 score for the DenseNet169 model. The experiments show that the highest overall accuracy, 88.83% with a standard deviation of 1.48%, was obtained with the DenseNet169 model.
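For reference, all of these class-wise metrics can be computed directly from a confusion matrix; the short NumPy sketch below uses a made-up 3-class matrix, not the paper's data.

```python
import numpy as np

def classwise_metrics(cm):
    """Per-class precision, sensitivity (recall), specificity and F1 from a confusion
    matrix whose rows are true classes and columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - tp - fp - fn
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, specificity, f1

# Made-up 3-class example.
cm = [[50, 2, 1],
      [3, 45, 4],
      [0, 5, 40]]
for name, vals in zip(["precision", "sensitivity", "specificity", "f1"],
                      classwise_metrics(cm)):
    print(name, np.round(vals, 3))
```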

      5. CONCLUSION

According to this review of the three methods above, the GAEnsemble method achieved 98.81%, 95.15%, and 67.13% classification accuracy on the D0, SMALL, and IP102 datasets, respectively, which is higher than the accuracy obtained by the other CNN models on all three datasets. In the second method, based on a deep convolutional neural network, the researchers used manually collected images containing complex backgrounds and proposed a fine-tuned GoogLeNet model that outperformed the other models in terms of accuracy, model complexity and robustness despite the complex backgrounds. The third paper suggests a transfer learning based framework that can be used when a large dataset and high computational power are not available; among the 15 pre-trained models evaluated, the DenseNet169 model performed best, achieving 88.83% accuracy.

REFERENCES

1. Cheng, Xi, Youhua Zhang, Yiqiong Chen, Yunzhi Wu, and Yi Yue. "Pest identification via deep residual learning in complex background." Computers and Electronics in Agriculture 141, pp. 351-356, 2017.

  2. Liu, Ziyi, Junfeng Gao, Guoguo Yang, Huan Zhang, and Yong He. "Localization and classification of paddy field pests using a saliency map and deep convolutional neural network." Scientific Reports 6, No. 1, pp. 1-12, 2016.

  3. Xie, C., Zhang, J., Li, R., Li, J., Hong, P., Xia, J., and Chen, P. "Automatic classification for field crop insects via multiple-task sparse representation and multiple-kernel learning." Computers and Electronics in Agriculture 119, pp. 123-132, 2015.

  4. Xie, C., Wang, R., Zhang, J., Chen, P., Dong, W., Li, R., Chen, T., and Chen, H. "Multilevel learning features for automatic classification of field crop pests." Computers and Electronics in Agriculture 152, pp. 233-241, 2018.

  5. Xia, D., Chen, P., Wang, B., Zhang, J., and Xie, C. "Insect detection and classification based on an improved convolutional neural network." Sensors 18, No. 12, 4169, 2018.

  6. Wang, R., Zhang, J., Dong, W., Yu, J., Xie, C. J., Li, R., Chen, T., and Chen, H. "A crop pests image classification algorithm based on deep convolutional neural network." Telkomnika 15, No. 3, 2017.

  7. Liu, Jun, and Xuewei Wang. "Tomato diseases and pests detection based on improved Yolo V3 convolutional neural network." Frontiers in Plant Science 11, 898, 2020.

  8. Wang, Jin, Yane Li, Hailin Feng, Lijin Ren, Xiaochen Du, and Jian Wu. "Common pests image recognition based on deep convolutional neural network." Computers and Electronics in Agriculture 179, 105834, 2020.

  9. Kirkeby, Carsten, Klas Rydhmer, Samantha M. Cook, Alfred Strand, Martin T. Torrance, Jennifer L. Swain, Jord Prangsma, et al. "Advances in automatic identification of flying insects using optical sensors and machine learning." Scientific Reports 11, No. 1, pp. 1-8, 2021.

  10. Jiao, Lin, Shifeng Dong, Shengyu Zhang, Chengjun Xie, and Hongqiang Wang. "AF-RCNN: An anchor-free convolutional neural network for multi-categories agricultural pest detection." Computers and Electronics in Agriculture 174, 105522, 2021.

  11. Xing, Shuli, Marely Lee, and Keun-kwang Lee. "Citrus pests and diseases recognition model using weakly dense connected convolution network." Sensors 19, No. 14, 3195, 2019.

  12. Rustia, Dan Jeric Arcega, Jun-Jee Chao, Lin-Ya Chiu, Ya-Fang Wu, Jui-Yung Chung, Ju-Chun Hsu, and Ta-Te Lin. "Automatic greenhouse insect pest detection and recognition based on a cascaded deep learning classification method." Journal of Applied Entomology 145, No. 3, pp. 206-222, 2021.

  13. Li, Weilu, Peng Chen, Bing Wang, and Chengjun Xie. "Automatic localization and count of agricultural crop pests based on an improved deep learning pipeline." Scientific Reports 9, No. 1, pp. 1-11, 2019.

  14. Alves, Adão Nunes, Witenberg S. R. Souza, and Díbio Leandro Borges. "Cotton pests classification in field-based images using deep residual networks." Computers and Electronics in Agriculture 174, 105488, 2019.

  15. Alfarisy, Ahmad Arib, Quan Chen, and Minyi Guo. "Deep learning based classification for paddy pests & diseases recognition." In Proceedings of the 2018 International Conference on Mathematics and Artificial Intelligence, pp. 21-25, 2018.

  16. Liu, Liu, Rujing Wang, Chengjun Xie, Po Yang, Fangyuan Wang, Sud Sudirman, and Wancai Liu. "PestNet: An end-to-end deep learning approach for large-scale multi-class pest detection and classification." IEEE Access 7, pp. 45301-45312, 2019.

  17. Tetila, Everton Castelão, Bruno Brandoli Machado, Gilberto Astolfi, Nícolas Alessandro de Souza Belete, Willian Paraguassu Amorim, Antonia Railda Roel, and Hemerson Pistori. "Detection and classification of soybean pests using deep learning with UAV images." Computers and Electronics in Agriculture 179, 105836, 2019.

  18. Chandy, Abraham. "Pest infestation identification in coconut trees using deep learning." Journal of Artificial Intelligence 1, No. 1, pp. 10-18, 2019.

  19. Sun, Yu, Xuanxin Liu, Mingshuai Yuan, Lili Ren, Jianxin Wang, and Zhibo Chen. "Automatic in-trap pest detection using deep learning for pheromone-based Dendroctonus valens monitoring." Biosystems Engineering 176, pp. 140-150, 2018.

  20. Panchbhaiyye, Vineet, and Tokunbo Ogunfunmi. "Experimental results on using deep learning to identify agricultural pests." In 2018 IEEE Global Humanitarian Technology Conference (GHTC), pp. 1-2, IEEE, 2018.

  21. Ayan, Enes, Hasan Erbay, and Fatih Varçın. "Crop pest classification with a genetic algorithm-based weighted ensemble of deep convolutional neural networks." Computers and Electronics in Agriculture 179, 105809, 2020.

  22. Li, Yanfen, Hanxiang Wang, L. Minh Dang, Abolghasem Sadeghi-Niaraki, and Hyeonjoon Moon. "Crop pest recognition in natural scenes using convolutional neural networks." Computers and Electronics in Agriculture 169, 105174, 2020.

  23. Pattnaik, Gayatri, Vimal K. Shrivastava, and K. Parvathi. "Transfer learning-based framework for classification of pest in tomato plants." Applied Artificial Intelligence 34, No. 13, pp. 981-993, 2020.

  24. Deng, L., Wang, Y., Han, Z., and Yu, R. "Research on insect pest image detection and recognition based on bio-inspired methods." Biosystems Engineering 169, pp. 139-148, 2018.

  25. Wu, X., Zhan, C., Lai, Y.-K., Cheng, M.-M., and Yang, J. "IP102: A large-scale benchmark dataset for insect pest recognition." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8787-8796, 2019.

  26. Simonyan, K., and Zisserman, A. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556, 2014.

  27. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. "Rethinking the inception architecture for computer vision." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, 2016.

  28. Flickr. 2018. The online photo management and sharing application. Photos of pests available online. Accessed December 10, 2018. https://www.flickr.com/search/?text=helicoverpa%20armigera.

  29. Insect Images. 2018. The Entomology Society of America and USDA Identification Technology Program, last updated in 2018. Photos of pests available online. Accessed December 22, 2018. https://www.insectimages.org/search/action.cfm?q=spodoptera+litura.

  30. IPM Images. 2018. The Center for Invasive Species and Ecosystem Health, last updated in 2018. Photos of pests available online. Accessed December 18, 2018. https://www.ipmimages.org/browse/Areathumb.cfm?area=63.

  31. Krizhevsky, A., I. Sutskever, and G. E. Hinton. "ImageNet classification with deep convolutional neural networks." In Advances in Neural Information Processing Systems, No. 25, pp. 1097-1105, 2012.
