International Scholarly Publisher
Serving Researchers Since 2012

Deep Learning-Based Classification of Sorghum Pests for Early Detection

DOI : https://doi.org/10.5281/zenodo.18787569



Ousmane Khouma

Polytech Diamniadio, Université Amadou Mahtar MBOW, Dakar, Senegal

Madior Gueye

Ecole Supérieure Polytechnique, Université Cheikh Anta DIOP, Dakar, Senegal

Fatoumata Binetou Drame

Polytech Diamniadio, Université Amadou Mahtar MBOW, Dakar, Senegal

Yassine Mouhamed Bachir Ndiaye

Polytech Diamniadio, Université Amadou Mahtar MBOW, Dakar, Senegal

Abstract: Cereals and legumes are staple foods across many African countries. Despite their nutritional and economic importance, these crops are frequently attacked by pests whose infestations can partially or completely destroy the plants. This project aims to develop an automated recognition system capable of identifying pests and diagnosing the damage they cause, with a particular focus on sorghum crops in West and Central Africa. The system is based on image classification techniques and machine learning models designed to detect damage patterns associated with specific pest species. By analyzing visual indicators characteristic of each pest, the model attempts to infer the responsible species from the observed symptoms. Our results show high accuracy and IoU scores, highlighting the feasibility of AI-driven decision-support tools for smallholder farmers. This work contributes to sustainable agriculture by enabling early diagnosis and targeted interventions.

Keywords: Sorghum; Pest recognition; Crop damage; Machine learning; Agriculture; Image classification

  1. INTRODUCTION

    Sorghum is one of the most widely cultivated crops in Sub-Saharan Africa, serving as a staple food source for millions of people. It is drought-resistant, highly adaptable to diverse soil types, and plays a central role in regional food security. Beyond human consumption, sorghum is also used for animal feed and brewing, making it a crop of major agronomic and economic importance. In West and Central Africa, sorghum accounts for around 20% of the total cereal cultivation area [1].

    However, sorghum production faces multiple threats, particularly from pests and diseases that significantly reduce yield, grain quality, and overall productivity. Some of the most damaging pests include aphids, stem borers, and shoot flies, which not only cause direct harm to the plants but also create entry points for secondary pathogens. Their proliferation is often intensified by climate variability, monocropping practices, and limited access to plant protection solutions within smallholder farming systems [2].

    Traditional detection and monitoring of biotic stresses rely on manual inspection by farmers or extension workers, a process that is slow, subjective, and prone to human error. The shortage of plant health specialists in rural areas further reduces the effectiveness of this approach. Recent advances in computer vision and deep learning offer a promising alternative: Convolutional Neural Networks (CNNs) have shown strong performance in visual recognition tasks, including plant disease classification and pest detection. Combined with semantic segmentation, they can precisely locate damaged areas, providing more accurate, interpretable, and actionable diagnostics [3].

    This paper presents a deep learning-based framework that integrates both classification and segmentation to recognize major sorghum diseases and detect pest-induced damage, with a focus on aphid infestations. Our system is trained and validated on real field data and is designed to support smallholder farmers in making timely and informed management decisions. By offering an accessible and cost-effective diagnostic tool, we aim to contribute to improved crop management, reduced yield losses, and strengthened food resilience across West and Central Africa.

    The structure of this paper is organized as follows:

    Section II provides an overview of the main sorghum pests and examines their detrimental effects on crop productivity. Section III reviews the existing literature on sorghum pest research. Section IV describes the proposed methodology, including the dataset construction, annotation process, preprocessing steps, model architecture, training strategy, validation and testing protocols, as well as the performance metrics used. Section V presents an exploratory analysis of the dataset through visualizations and descriptive statistics. Section VI details the model development and training procedures, while Section VII reports the experimental results and discusses model performance on both healthy and pest-affected sorghum images. Finally, Section VIII concludes the paper and highlights potential directions for future research.

  2. THE DETRIMENTAL IMPACT OF PESTS ON SORGHUM CULTIVATION
    1. Impact on Crop Performance

      Pests such as aphids, stem borers, and midge flies feed on critical parts of the sorghum plant, causing chlorosis, reduced photosynthesis, and stunted growth. Aphids, in particular, excrete honeydew that fosters the growth of sooty mold, further impairing the plant's ability to function properly. Continuous infestation leads to grain loss and structural damage, weakening plant resilience to drought and nutrient stress.

      Quantitatively, research has shown that:

      • Severe infestations by pests like Melanaphis sacchari (sorghum aphid) and Chilo partellus (stem borer) can result in up to 80% yield loss [4].
      • Aphid infestations may cause up to 50% reduction in photosynthetic efficiency due to chlorophyll degradation [3].
      • Pests reduce grain quality through mold development and make the crop commercially less valuable [3].
    2. Economic and Social Impact

    In Sub-Saharan Africa, agriculture accounts for over 60% of employment and contributes nearly 23% to the region's GDP [5]. Sorghum, as one of the top five staple crops, plays a crucial role in food security, especially in arid and semi-arid zones.

    Pest-related crop losses can reduce household income by up to 50%, particularly for smallholder farmers who depend heavily on seasonal harvests for subsistence and market sales [6]. In heavily infested regions, yield losses due to pests like Melanaphis sacchari (sorghum aphid) and Chilo partellus (stem borer) can reach 80%, leading to food shortages and increasing dependency on food imports [7].

    Moreover, farmers often resort to broad-spectrum pesticides, which represent a financial burden: in some cases, pest control costs can consume up to 30% of a farmer's annual income [8]. This economic pressure is exacerbated by the limited access to early warning systems and precision agriculture tools in rural areas, which hinders timely interventions and sustainable practices.

    The lack of scalable, affordable pest detection technologies contributes to a widening productivity gap between smallholders and industrial-scale farms. Early and accurate detection of pest infestations, enabled by machine learning and image-based diagnosis, has the potential to reduce pesticide usage, lower production costs, and support ecological farming methods [3] [9].

  3. LITERATURE REVIEW ON SORGHUM PESTS

    Deep learning has become a key component in automated pest monitoring and crop phenotyping. YOLO-based detectors demonstrate strong performance in sorghum aphid identification, with YOLOv5m achieving high precision and recall and enabling integration into mobile or unmanned systems [10], [11]. Reviews confirm the relevance of convolutional architectures for insect pest identification, while highlighting limitations related to dataset size, scalability, and real-time deployment [12], [13].

    For sorghum phenotyping, YOLO and Faster R-CNN models have been applied to UAV and laboratory imagery, with YOLO consistently outperforming Faster R-CNN in panicle detection. The derived phenotypic traits improve yield prediction when incorporated into machine-learning regression models [14], [15]. Other studies report highly accurate weed classification using enhanced DenseNet-169 architectures with multi-scale modules and attention mechanisms, demonstrating strong generalization and interpretability [16], [17]. Systematic analyses show growing use of AI in sorghum and millet for land evaluation, disease and weed management, and crop prediction, while adoption in irrigation and climate-impact modeling remains limited [18], [19].

    CNN-based approaches have also been validated for real-time pest detection in maize, where VGG-16 and Conv2D-based models achieve high accuracy on large custom datasets [20], [21]. More recent transformer-based frameworks integrating Swin Transformer, YOLOv9-c, and SegNet-Transformer achieve high performance across classification, localization, and segmentation tasks [22], [23]. Spectral reflectance analyses further demonstrate that aphid infestation in sorghum can be detected through changes in the 550–650 nm range, supporting UAS-based remote sensing applications [24].

  4. METHODOLOGY

    This study is composed of two main tasks: (1) the classification of sorghum leaf diseases, and (2) the semantic segmentation of aphid pest clusters. Both tasks leverage supervised deep learning techniques and publicly available datasets, implemented using Python frameworks in Google Colab.

    1. Dataset Preparation

      Two datasets were used:

      • Sorghum Disease Dataset: A labeled image collection of sorghum leaves affected by various diseases such as anthracnose, leaf blight, and sooty stripe. The dataset was organized into class-specific subfolders and used for image classification.
      • AphidSeg Dataset [25]: A large-scale image and mask dataset for aphid infestation in sorghum fields. It includes 54,742 RGB images and corresponding segmentation masks, generated at multiple spatial scales. Only a subset was used for training and evaluation due to size constraints.

        Both datasets were preprocessed using the following pipeline:

      • Image resizing to 224×224 (classification) and 512×512 (segmentation);
      • Normalization (pixel values scaled to [0,1]);
      • Data augmentation (rotation, flipping, contrast) using albumentations for robustness;
      • Mask alignment and encoding for segmentation labels.
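As an illustration, the resizing, normalization, and flip-augmentation steps above can be sketched in plain NumPy. This is a minimal stand-in for the actual pipeline, which uses albumentations; the helper names and the nearest-neighbour resize are ours, not the paper's.

```python
import numpy as np

def preprocess(image, size=224):
    """Nearest-neighbour resize to size x size, then scale pixels to [0, 1]."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0

def augment(image, rng):
    """Random horizontal/vertical flips, a simplified stand-in for albumentations."""
    if rng.random() < 0.5:
        image = image[:, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        image = image[::-1, :]   # vertical flip
    return image

rng = np.random.default_rng(0)
raw = rng.integers(0, 256, size=(300, 400, 3), dtype=np.uint8)  # dummy RGB field photo
x = augment(preprocess(raw), rng)
print(x.shape)  # (224, 224, 3), values in [0, 1]
```

In the segmentation branch the same resize must be applied jointly to the image and its mask so the two stay pixel-aligned.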
    2. Classification Pipeline

      The classification model was implemented using TensorFlow/Keras. The architecture is a Convolutional Neural Network (CNN) comprising:

      • Two convolutional layers (ReLU activation, 3×3 kernel) followed by max-pooling;
      • One fully connected dense layer with dropout (0.5);
      • Softmax output for multi-class classification.

        The model was trained for 10 epochs with:

      • Optimizer: Adam (learning rate = 0.001);
      • Loss function: categorical cross-entropy;
      • Metrics: accuracy, precision, recall.

        The model achieved over 90% validation accuracy, showing promising results in recognizing leaf diseases visually.
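The softmax output and categorical cross-entropy objective used above reduce to a few lines of NumPy. The logits and one-hot labels below are illustrative values, not drawn from the dataset:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Batch mean of -sum_k y_k * log(p_k) for one-hot labels y."""
    return float(-(y_true * np.log(y_pred + eps)).sum(axis=-1).mean())

# Four leaf conditions: healthy, anthracnose, leaf blight, sooty stripe
logits = np.array([[4.0, 0.5, 0.2, 0.1],   # confidently "healthy"
                   [0.1, 0.2, 3.5, 0.3]])  # confidently "leaf blight"
y_true = np.array([[1, 0, 0, 0],
                   [0, 0, 1, 0]], dtype=float)

probs = softmax(logits)
loss = categorical_cross_entropy(y_true, probs)
accuracy = float((probs.argmax(axis=-1) == y_true.argmax(axis=-1)).mean())
print(round(loss, 4), accuracy)
```

Both predictions here match the labels, so accuracy is 1.0 and the loss is small; misclassified or low-confidence samples drive the loss up.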

    3. Semantic Segmentation Pipeline

      The segmentation task was handled using the Detectron2 framework by Facebook AI [26]. We used a pre-trained Mask R-CNN architecture fine-tuned on the AphidSeg dataset.

      Key components:

      • Backbone: ResNet-50 with Feature Pyramid Network (FPN);
      • Input size: 512 × 512;
      • Loss functions: mask loss (binary cross-entropy), box loss, and class loss;
      • Training: fine-tuning for 5000 iterations with early stopping based on validation loss.
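The early-stopping rule can be illustrated with a small framework-agnostic sketch; this is a simplified stand-in for a Detectron2 validation hook, and the function name, patience value, and loss sequence are hypothetical:

```python
def early_stopping_fine_tune(val_losses, patience=3):
    """Return (stop_step, best_step, best_loss): track the best validation loss
    seen so far and halt after `patience` evaluations without improvement."""
    best, best_step, waited = float("inf"), 0, 0
    for step, loss in enumerate(val_losses):
        if loss < best:
            best, best_step, waited = loss, step, 0  # new best: reset the counter
        else:
            waited += 1
            if waited >= patience:
                return step, best_step, best
    return len(val_losses) - 1, best_step, best

# Validation loss evaluated periodically during fine-tuning (illustrative values)
losses = [0.80, 0.55, 0.42, 0.40, 0.41, 0.43, 0.44]
stop, best_step, best = early_stopping_fine_tune(losses)
print(stop, best_step, best)  # stops at step 6; best loss 0.40 at step 3
```

In practice the checkpoint saved at `best_step` is the one kept for evaluation, not the weights at the stopping step.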

        We evaluated the segmentation model using:

      • Intersection over Union (IoU);
      • Dice coefficient;
      • Visual inspection of predicted masks overlaid on test images.
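Both IoU and the Dice coefficient reduce to simple set operations on binary masks. A minimal NumPy sketch on toy 4×4 masks (the mask values are illustrative):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

# Toy masks: the prediction recovers 2 of the 3 ground-truth aphid pixels
gt   = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
pred = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

print(iou(pred, gt), dice(pred, gt))  # 0.5 and 2/3
```

Note that Dice is always at least as large as IoU on the same pair of masks, which is why both are usually reported together.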
    4. Infrastructure and Tools

      All experiments were conducted using Google Colab Pro with GPU acceleration (Tesla T4 or P100). The environment consisted of:

      • Python 3.10, TensorFlow 2, PyTorch 2.0, Detectron2;
      • OpenCV and matplotlib for visualization;
      • 7-Zip for extracting password-protected, multi-part ZIP archives;
      • GitHub and Google Drive for dataset storage and code sharing.
  5. EXPLORATORY DATA ANALYSIS

    Prior to training our models, we conducted an exploratory data analysis to better understand the structure, diversity, and potential biases of the datasets.

    1. Sorghum Disease Dataset

      The sorghum disease dataset includes four classes of leaf conditions: healthy, anthracnose, leaf blight, and sooty stripe.

      • Class distribution: A bar chart (see Fig. 1) reveals moderate class imbalance across the four classes.
      • Image characteristics: All images were RGB (see Fig. 2), with varied lighting and background conditions. Some noise and labeling inconsistencies were observed.

    2. AphidSeg Dataset

    The AphidSeg dataset consists of RGB images and their corresponding binary masks.

    Patch size distribution: Images were derived from original field photographs and split into patches of various scales.

    Fig. 1. Distribution of images per class in the sorghum disease dataset.

    Fig. 2. Sorghum diseases.

  6. MODEL DESIGN AND TRAINING

    The Convolutional Neural Network (CNN) was implemented using a sequential architecture composed of multiple convolutional layers. Each convolutional layer is followed by a Rectified Linear Unit (ReLU) activation function and a max-pooling operation, which serves to reduce the spatial dimensions and highlight the most informative features (see Fig. 3).

    Following the convolutional blocks, the resulting feature maps were flattened and passed through fully connected (dense) layers. A final softmax activation layer was used to perform multi-class classification of the sorghum pest categories.

    To enhance generalization and mitigate overfitting, dropout layers were inserted between the dense layers. The model was trained over 50 epochs with a batch size of 32, using the Adam optimization algorithm. An initial learning rate of 0.001 was set. The training process optimized the categorical cross-entropy loss function, which is well-suited for multiclass classification problems.

    The dataset was divided into training, validation, and test subsets in an 80:10:10 ratio. To further improve the model's robustness and simulate real-world variability, data augmentation techniques such as random rotations, horizontal and vertical flips, and brightness adjustments were applied during training.
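The 80:10:10 split can be sketched as a seeded shuffle of sample indices. This is a minimal NumPy illustration (the seed and sample count are arbitrary); in practice Keras utilities or scikit-learn's `train_test_split` serve the same purpose:

```python
import numpy as np

def split_indices(n, seed=42, ratios=(0.8, 0.1, 0.1)):
    """Shuffle n sample indices and split them into train/validation/test
    subsets according to `ratios` (80:10:10 by default)."""
    rng = np.random.default_rng(seed)   # fixed seed for a reproducible split
    idx = rng.permutation(n)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(1000)
print(len(train), len(val), len(test))  # 800 100 100
assert len(set(train) | set(val) | set(test)) == 1000  # disjoint and exhaustive
```

Fixing the seed keeps the split identical across runs, so validation and test scores remain comparable between experiments.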

    Fig. 3. Classification model schema.

  7. EXPERIMENTAL RESULTS
    1. Evaluation

      The training dynamics of the model were further analyzed through the accuracy and loss curves, which provide insight into the learning behavior and generalization capacity of the network throughout the optimization process. As shown in Fig. 4, both training and validation accuracy exhibit a steep rise during the first few epochs, indicating that the model rapidly captures discriminative features and benefits from efficient gradient-based optimization. This phase is characteristic of successful early-stage convergence, where the network moves quickly toward a suitable region of the parameter space. Training accuracy continues to increase thereafter and eventually saturates near 100%, while validation accuracy stabilizes within the range 0.90–0.94, with minor fluctuations in later epochs. The progressive divergence between the curves suggests the onset of mild overfitting; however, the consistently high validation accuracy demonstrates that the model retains strong generalization across most pest classes, even as it increasingly adapts to the training data.

      Fig. 4. Training and validation accuracy over epochs.

      A comparable trend can be observed in the loss curves (see Fig. 5). Training loss decreases sharply during the initial epochs, reflecting a rapid reduction of classification errors, and asymptotically approaches near-zero values as training progresses. In contrast, validation loss converges more slowly and stabilizes between 0.25 and 0.40, without following the continued downward trajectory of the training loss. This gap between the two curves confirms the presence of moderate overfitting: the model continues to refine its predictions on the training samples but does not obtain equivalent gains on unseen data. The slight oscillations in the validation loss across later epochs indicate sensitivity to visually similar species and to class imbalance within the dataset. Nevertheless, the overall stability of the validation loss suggests that the model preserves reliable predictive behavior despite these challenges.

      Fig. 5. Training and validation loss over epochs.

      Taken together, these observations indicate that the model learns efficiently, converges rapidly, and achieves strong predictive performance on unseen data while exhibiting a controlled degree of overfitting. Potential improvements, such as enhanced data augmentation strategies, stronger regularization, or increased representation of minority classes, could further reduce overfitting and improve the robustness and fairness of the model across all pest categories.

    2. Contributions and Innovations

      This study introduces a pioneering framework that integrates both image classification and segmentation for the detection of sorghum pests within the context of West African agriculture. The proposed system is designed to be lightweight, modular, and easily deployable, with the entire workflow implemented on Google Colab to ensure broad accessibility. This design choice facilitates reproducibility and enables researchers, extension workers, and agricultural stakeholders to adapt and apply the methodology without requiring specialized hardware or advanced computational resources.

      The pipeline consolidates all essential components (data preprocessing, model architecture configuration, training and validation procedures, performance evaluation, and inference) within a unified notebook environment. This organization not only simplifies experimentation and debugging but also supports collaborative development, version control, and rapid prototyping in low-resource settings. In addition, the framework can be readily extended to incorporate additional pest categories, domain-specific augmentations, or real-time deployment strategies suitable for field applications.

      To the best of our knowledge, this work represents one of the first deep learning-based systems explicitly dedicated to sorghum pest recognition and segmentation in the West African region. By addressing a critical gap in automated crop protection technologies for local farming systems, this study lays the groundwork for future advances in precision agriculture, pest monitoring, and decision-support tools tailored to regional agronomic needs.

    3. Discussion

    The results obtained in this study are promising, as the model demonstrates strong performance in identifying the major pest species that significantly impact sorghum production. Despite these encouraging outcomes, several challenges persist. Class imbalance within the dataset led to reduced precision for minority pest categories, indicating the need for improved data acquisition strategies or the incorporation of synthetic data generation techniques, such as Generative Adversarial Networks (GANs) or oversampling methods like SMOTE. Addressing this imbalance would contribute to more equitable performance across all classes and enhance the reliability of the system in real-world applications.

    In addition, natural variations encountered under field conditions, such as inconsistent lighting, heterogeneous backgrounds, and variable camera angles, can adversely affect model accuracy. To mitigate these issues, future research should explore more advanced normalization and domain adaptation strategies, as well as the integration of attention-based mechanisms to better capture discriminative regions in uncontrolled environments. Such enhancements could significantly improve the robustness and adaptability of the system during in-field deployment.

    Another promising direction involves the extension of this work toward integrating segmentation models, such as U-Net or Mask R-CNN, to precisely localize pest instances within images. Accurate spatial localization would provide valuable support for decision-making tools and field monitoring systems, enabling more targeted interventions in precision agriculture. This combination of classification and segmentation capabilities has the potential to yield a comprehensive and operationally relevant solution for automated pest management in sorghum farming.

  8. CONCLUSION

This study introduces a deep learning framework for the classification and segmentation of sorghum pests, addressing a critical agricultural challenge in West Africa. Deployed in a lightweight Google Colab environment, the proposed pipeline achieves high accuracy in detecting major pest species while remaining practical for low-resource settings. The results highlight its potential to enable scalable, automated pest monitoring, empowering researchers, extension workers, and farmers to enhance crop protection and productivity.

Despite these achievements, challenges remain. Data imbalance led to lower precision for minority classes, emphasizing the need for expanded, representative datasets or advanced augmentation techniques such as GAN-based synthesis or SMOTE. Environmental variability, including inconsistent lighting, complex backgrounds, and uncontrolled camera orientations, can also compromise model performance. Tackling these issues through normalization, domain adaptation, or attention mechanisms is expected to further improve system robustness and generalization in real-world applications.

Future work will explore segmentation architectures (U-Net, Mask R-CNN) for precise pest localization, real-time edge inference, multi-modal sensing, and region-specific datasets to support broader adoption.

REFERENCES

  1. Andekelile Mwamahonje, John Saviour Yaw Eleblu, Kwadwo Ofori, Santosh Deshpande, Tileye Feyissa, and William Elisha Bakuza, Sorghum Production Constraints, Trait Preferences, and Strategies to Combat Drought in Tanzania, Sustainability, 2021, 13, 12942, DOI: https://doi.org/10.3390/su132312942
  2. Wentao Zhou, Yashwanth Arcot, Raul F. Medina, Julio Bernal, Luis Cisneros-Zevallos, and Mustafa E. S. Akbulut, Integrated Pest Management: An Update on the Sustainability Approach to Crop Protection, ACS Omega, 2024, 9, 41130–41147, DOI: https://doi.org/10.1021/acsomega.4c06628
  3. Andreas Kamilaris and Francesc X. Prenafeta-Boldú, Deep Learning in Agriculture: A Survey, Computers and Electronics in Agriculture, Elsevier, vol. 147, pp. 70–90, 2018, DOI: https://doi.org/10.1016/j.compag.2018.02.016
  4. Joshua Benjamin, Oluwadamilola Idowu, Oreoluwa Khadijat Babalola, Emmanuel Victor Oziegbe, David Olayinka Oyedokun, Aanuoluwapo Mike Akinyemi, and Aminat Adebayo, Cereal production in Africa: the threat of certain pests and weeds in a changing climate – a review, Agriculture & Food Security, Springer, vol. 13, article 18, 2024, DOI: https://doi.org/10.1186/s40066-024-00470-8
  5. FAO, World Food and Agriculture Statistical Pocketbook 2023, 138 p., ISBN: 978-92-5-138261-5, DOI: https://doi.org/10.4060/cc8165en
  6. Petros Chavula, Successes of Integrated Pest Management in Sorghum Production: A Review, International Journal of Academic and Applied Research (IJAAR), vol. 6, issue 7, pp. 110–115, 2022
  7. M. Sétamou, F. Schulthess, G. Goergen, H.-M. Poehling, and C. Borgemeister, Natural enemies of the maize cob borer, Mussidia nigrivenella (Lepidoptera: Pyralidae) in Benin, West Africa, Bulletin of Entomological Research, vol. 92, issue 4, pp. 343–349, 2002, DOI: https://doi.org/10.1079/BER2002175
  8. Jules Pretty and Zareen Pervez Bharucha, Sustainable Intensification of Agriculture: Greening the World's Food Economy, book, 196 p., London, 2018, DOI: https://doi.org/10.4324/9781138638044
  9. Khadija Javed, Guy Smagghe, Qi Wang, and Humayun Javed, Artificial intelligence in crop protection: Revolutionizing agriculture for a sustainable future, Information Processing in Agriculture, Elsevier, 2025, DOI: https://doi.org/10.1016/j.inpa.2025.12.003
  10. Ivan Grijalva, H. Braden Adams, Nicholas Clark, Brian McCornack, and Yong Wang, Detecting and counting sorghum aphid alates using smart computer vision models, Ecological Informatics, Elsevier, 2024, DOI: https://doi.org/10.1016/j.ecoinf.2024.102540
  11. Ivan Grijalva, Brian J. Spiesman, and Brian McCornack, Computer vision model for sorghum aphid detection using deep learning, Journal of Agriculture and Food Research, Elsevier, 2023, DOI: https://doi.org/10.1016/j.jafr.2023.100652
  12. Sourav Chakrabarty, Pathour Rajendra Shashank, Chandan Kumar Deb, Md. Ashraful Haque, Pradyuman Thakur, Deeba Kamil, Sudeep Marwaha, and Mukesh Kumar Dhillon, Deep learning-based accurate detection of insects and damage in cruciferous crops using YOLOv5, Smart Agricultural Technology, Elsevier, 2024, DOI: https://doi.org/10.1016/j.atech.2024.100663
  13. Sourav Chakrabarty, Chandan Kumar Deb, Sudeep Marwaha, Md. Ashraful Haque, Deeba Kamil, Raju Bheemanahalli, and Pathour Rajendra Shashank, Application of artificial intelligence in insect pest identification – A review, Artificial Intelligence in Agriculture, KeAi, vol. 16, issue 1, pp. 44–61, 2026, DOI: https://doi.org/10.1016/j.aiia.2025.06.005
  14. Gelana Keno Beyene, Amin Mohammed Yones, Ahmed Beyan Heji, and Mukaddes Kayim, Insect Pests and Diseases in Stored Sorghum (Sorghum bicolor L.) and Maize (Zea mays L.) in West Hararghe, Ethiopia, Advances in Agriculture, Wiley, 2024, DOI: https://doi.org/10.1155/2024/6650317
  15. Md Abdullah Al Bari, Aliva Bakshi, Jahid Chowdhury Choton, Swaraj Pramanik, Trevor D. Witt, Doina Caragea, Scott Bean, Krishna Jagadish, and Terry Felderhoff, Deep Learning for Sorghum Yield Forecasting using Uncrewed Aerial Systems and Lab-Derived Imagery, bioRxiv, 2025, DOI: https://doi.org/10.1101/2025.07.11.663520
  16. Sandhya Devi Ramiah Subburaj, Cowshik Eswaramoorthy, Vishnu Gunasekaran Latha, and Rakshan Kaarthi Palanisamy Chinnasamy, Efficient Pest Detection Through Advanced Machine Learning Technique, Current Agriculture Research Journal, ISSN: 2347-4688, vol. 12, no. 3, pp. 1127–1134, 2024
  17. Armaano Ajay, S. Sandosh, Aryan Saji, and Harsh Agarwal, An Explainable Deep Learning Framework for Sorghum Weed Classification Using Multi-Scale Feature Enhanced DenseNet, IEEE Access, vol. 13, 2025, DOI: https://doi.org/10.1109/ACCESS.2025.3538937
  18. Chaoxin Wang, Ivan Grijalva, Doina Caragea, and Brian McCornack, Detecting common coccinellids found in sorghum using deep learning models, Scientific Reports, Nature, 2023, DOI: https://doi.org/10.1038/s41598-023-36738-5
  19. Innocent Kutyauripo, Munyaradzi Rushambwa, and Rajkumar Palaniappan, Applications of Artificial Intelligence in sorghum and millet farming, Circular Agricultural Systems, vol. 5, 2025, DOI: https://doi.org/10.48130/cas-0025-0001
  20. Zhe Lin and Wenxuan Guo, Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning, Frontiers in Plant Science, 11:534853, 2020, DOI: https://doi.org/10.3389/fpls.2020.534853
  21. Syed Ijaz Ul Haq, Ali Raza, Yubin Lan, and Shizhou Wang, Identification of Pest Attack on Corn Crops Using Machine Learning Techniques, Eng. Proc., 2023, 56, 183, DOI: https://doi.org/10.3390/ASEC2023-15953
  22. E. Tunca, E. S. Köksal, H. Akay, E. Öztürk, and S. Ç. Taner, Novel machine learning framework for high-resolution sorghum biomass estimation using multitemporal UAV imagery, International Journal of Environmental Science and Technology, 2025, 22:13673–13688, DOI: https://doi.org/10.1007/s13762-025-06498-y
  23. J. Amin, R. Zahra, A. Maryum, A. Sarwar, A. Zafar, and S.-H. Kim, Sorghum crops classification and segmentation using shifted window transformer neural network and localization based on (YOLO)v9-path aggregation network, Frontiers in Plant Science, 16:1586865, 2025, DOI: https://doi.org/10.3389/fpls.2025.1586865
  24. Grace Craigie, Trevor Hefley, Ivan Grijalva, Ignacio A. Ciampitti, Douglas G. Goodin, and Brian McCornack, Detecting sorghum aphid infestation in grain sorghum using leaf spectral response, Scientific Reports, 2024, DOI: https://doi.org/10.1038/s41598-024-64841-8
  25. Richard Wang, Aphid Cluster Segmentation Dataset, Harvard Dataverse, 2023, DOI: https://doi.org/10.7910/DVN/N3YJXG
  26. Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick, Detectron2, 2019, https://github.com/facebookresearch/detectron2/