
Automated Sugarcane Disease Detection Using AI-Driven Image Analysis

DOI : https://doi.org/10.5281/zenodo.19093070


Ujjwal Singh

Dept. Computer Science & Engineering Babu Banarasi Das Institute of Technology & Management (Dr. A P J Abdul Kalam Technical University)

Lucknow, India

Vishesh Kumar Singh

Dept. Computer Science & Engineering Babu Banarasi Das Institute of Technology & Management (Dr. A P J Abdul Kalam Technical University)

Lucknow, India

Vishal Yadav

Dept. Computer Science & Engineering Babu Banarasi Das Institute of Technology & Management (Dr. A P J Abdul Kalam Technical University)

Lucknow, India

Guided by: Mr. Chinmay Shukla

Assistant Professor

Dept. Computer Science & Engineering Babu Banarasi Das Institute of Technology & Management (Dr. A P J Abdul Kalam Technical University)

Lucknow, India

Abstract: Sugarcane is one of the most economically significant crops in countries such as India and Brazil. However, its productivity is heavily affected by fungal, bacterial, and viral infections including red rot, smut, and leaf scald. Traditional disease identification depends mainly on manual field inspection, which is time-consuming and often inaccurate during early stages. With the rapid growth of Artificial Intelligence (AI) and deep learning, image-based crop disease detection has emerged as a promising solution. This paper reviews recent advancements in AI-driven sugarcane disease detection using digital image processing and convolutional neural networks (CNNs). It analyses datasets, preprocessing techniques, model architectures, evaluation metrics, and practical deployment challenges. The review also highlights research gaps and future opportunities in integrating AI with IoT and drone technologies for precision agriculture.

Keywords: Artificial Intelligence, Sugarcane Disease, Deep Learning, Image Processing, Precision Agriculture, CNN.

  1. INTRODUCTION

    Sugarcane diseases such as red rot, smut, rust, leaf scald, and mosaic virus pose a serious threat to global sugar production. These diseases not only reduce crop yield but also impact sugar recovery rates and overall profitability for farmers [11][12][13]. Early detection is essential because internal infection often begins before visible symptoms appear on leaves or stems, making late-stage diagnosis ineffective for disease control [14][15]. Traditionally, farmers depend on visual judgment or laboratory testing, which can delay intervention and increase economic losses [16][17]. As agricultural land areas expand and labour availability decreases, manual monitoring becomes increasingly impractical [18].

    The rapid advancement of computer vision and deep learning has opened new possibilities for automated disease recognition. AI-based image analysis systems eliminate the dependency on handcrafted features and expert-driven diagnosis by enabling end-to-end learning from raw leaf images [19][20]. Deep neural networks automatically extract hierarchical features such as texture, colour variation, lesion shape, and infection patterns, significantly improving detection accuracy [21][22]. Transfer learning approaches, where pretrained models are fine-tuned on crop-specific datasets, have further enhanced classification results, especially when training data is limited [23][24]. Despite these technological improvements, challenges remain in ensuring robustness under real-field environmental conditions [25][26].

  2. LITERATURE REVIEW

    1. Traditional Image Processing and Machine Learning Approaches

      Early research in plant disease detection primarily focused on classical image processing techniques. Methods such as color space conversion, histogram equalization, threshold-based segmentation, and Gray-Level Co-occurrence Matrix (GLCM) texture analysis were commonly used to extract disease-related features [27][28]. After manual feature extraction, machine learning classifiers including Support Vector Machines (SVM), K-Nearest Neighbours (KNN), Naïve Bayes, and Decision Trees were applied for classification tasks [29][30]. These approaches achieved moderate success rates, generally ranging between 70% and 85% accuracy, depending on dataset quality and environmental consistency [31][32].
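    To make the classical pipeline concrete, the sketch below computes a GLCM and two Haralick-style texture features (contrast and homogeneity) in plain NumPy. It is a simplified illustration of the cited approach, using a single horizontal pixel offset rather than a full multi-angle GLCM; in the reviewed studies such feature vectors would then be passed to an SVM or KNN classifier.

```python
import numpy as np

def glcm(img, levels=8):
    """Gray-Level Co-occurrence Matrix for horizontal neighbours (offset (0, 1))."""
    m = np.zeros((levels, levels), dtype=np.float64)
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    total = m.sum()
    return m / total if total else m

def glcm_features(m):
    """Contrast and homogeneity — two classic Haralick-style texture features."""
    i, j = np.indices(m.shape)
    contrast = np.sum(m * (i - j) ** 2)
    homogeneity = np.sum(m / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

# A perfectly uniform patch has zero contrast and full homogeneity.
flat = np.zeros((4, 4), dtype=int)
c, h = glcm_features(glcm(flat))
print(c, h)  # 0.0 1.0
```

A diseased lesion with mottled texture would produce a higher contrast and lower homogeneity than healthy leaf tissue, which is what made these features usable for classical classifiers.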

      Although traditional models provided a foundation for automated disease detection, their dependence on handcrafted features limited scalability. Feature engineering required domain expertise and struggled to handle variations in lighting, background complexity, and overlapping leaf structures [33][34]. As agricultural datasets grew in diversity, researchers recognized the need for models capable of automatic feature learning and improved generalization [35].

    2. Deep Learning-Based Disease Detection

      The introduction of deep learning marked a significant transition in plant disease research. Convolutional Neural Networks (CNNs) became widely adopted due to their ability to automatically extract multi-level spatial features from images [5][6]. These models demonstrated significant improvement over traditional methods, often achieving accuracy levels above 90% under controlled experimental conditions [38][39].

      Transfer learning has proven especially beneficial in sugarcane disease detection because publicly available large-scale datasets are limited. By leveraging pretrained weights from large datasets such as ImageNet, researchers reduced training time and improved convergence performance [23][40]. Data augmentation techniques including rotation, flipping, scaling, brightness adjustment, and noise injection were also applied to enhance model robustness and prevent overfitting [24][31].
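      The augmentation strategies mentioned above can be sketched in a few lines of NumPy; real pipelines typically rely on library implementations with many more transforms, so this is only a minimal illustration:

```python
import numpy as np

def augment(img, rng):
    """Yield simple geometric/photometric variants of one leaf image.
    Rotation, flips, and a brightness shift mirror the augmentations
    described above; real pipelines add scaling and noise injection."""
    yield np.rot90(img)                      # 90-degree rotation
    yield np.fliplr(img)                     # horizontal flip
    yield np.flipud(img)                     # vertical flip
    bright = img.astype(np.float32) + rng.uniform(-30, 30)
    yield np.clip(bright, 0, 255).astype(img.dtype)  # brightness jitter

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
variants = list(augment(img, rng))
print(len(variants))  # 4
```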

      However, despite high laboratory accuracy, many studies reported performance degradation when models were tested under real-field conditions. Variations in illumination, camera quality, leaf orientation, and mixed infections introduce noise that affects model reliability [25][32]. This highlights the need for domain adaptation strategies and larger annotated datasets specific to sugarcane crops [9][11].

    3. Dataset Limitations and Practical Deployment Challenges

      One of the most frequently cited challenges across reviewed studies is the absence of standardized sugarcane disease datasets [9][10]. Unlike crops such as tomato or potato, which have well-established benchmark datasets, sugarcane image repositories are often locally collected and lack uniform annotation standards [12][13]. This limits cross-study comparison and reproducibility.

      Environmental variability significantly impacts model performance. Field images may contain soil backgrounds, overlapping leaves, shadows, and inconsistent lighting conditions that complicate segmentation [25][33]. Additionally, similar visual symptoms among different diseases create classification ambiguity, requiring advanced feature discrimination capabilities [14][35]. Researchers have suggested combining spectral imaging, hyperspectral data, or multi-modal sensing with RGB images to improve diagnostic precision [18][20].

      Deployment considerations also include computational constraints and accessibility. While deep CNNs provide high accuracy, their large parameter sizes make them unsuitable for direct deployment on low-cost smartphones used by farmers [37][39]. Lightweight architectures and edge-computing strategies have therefore gained research attention for practical implementation [36][38].

    4. Integration with Smart Farming Technologies

    Recent advancements emphasize integrating AI-based disease detection with Internet of Things (IoT) devices and drone-based monitoring systems [18][19]. UAV-based image acquisition enables large-scale monitoring of sugarcane plantations, reducing the need for manual field inspection [20][26]. Real-time disease alerts generated through cloud-connected AI platforms can assist farmers in taking immediate corrective measures [21][24].

    Despite these innovations, challenges related to network dependency, hardware costs, and data privacy remain concerns for widespread adoption in rural agricultural settings [22][34]. Future systems must prioritize affordability, energy efficiency, and offline functionality to ensure inclusivity and scalability [17][35].

  3. RESEARCH GAPS AND FUTURE DIRECTIONS

    The collective findings from more than forty reviewed studies indicate that although AI-driven sugarcane disease detection shows promising accuracy, several research gaps persist. There is a critical need for publicly available, large-scale annotated sugarcane datasets to ensure standardization and benchmarking [9][12]. Additionally, most studies focus on single-disease classification rather than multi-disease or severity-level prediction [11][14]. Explainability in deep learning models remains limited, making it difficult for farmers and agricultural officers to trust automated predictions [32][33].

    Real-field validation studies are comparatively fewer, highlighting the gap between laboratory experimentation and practical agricultural deployment [25][26].

    Future research should focus on hybrid models combining CNN architectures with attention mechanisms, explainable AI frameworks, and domain adaptation techniques. Integration with mobile applications and IoT-enabled advisory systems can further enhance usability and real-time responsiveness [18][21]. The combination of lightweight neural networks with edge computing devices offers a promising direction for scalable smart agriculture solutions [36][40].

  4. PROPOSED SYSTEM AND METHODOLOGY

    The proposed system is designed as an intelligent, end-to-end framework for automatic sugarcane disease detection using AI-driven image analysis. The system integrates image acquisition, preprocessing, deep learning-based classification, and a farmer advisory interface into a unified architecture. The objective is not only to classify diseases accurately but also to provide actionable insights that assist farmers in early intervention. The overall design is inspired by recent advancements in computer vision and precision agriculture systems discussed in prior studies [1][5][9][18].

    The system begins with an image acquisition layer where sugarcane leaf images are captured using smartphone cameras or unmanned aerial vehicles (UAVs). Field-level images are collected under natural lighting conditions to ensure realistic dataset representation. In large-scale plantations, drone-based imaging systems can capture high-resolution aerial images for disease monitoring across wide agricultural areas [18][20][26]. This approach reduces manual inspection time and enables early large-area screening.

    Once images are captured, they pass through a preprocessing pipeline. Preprocessing plays a critical role in improving model performance because raw agricultural images often contain background noise, uneven illumination, and irrelevant objects. The preprocessing stage includes image resizing, normalization, noise reduction using Gaussian filtering, and contrast enhancement techniques. Background segmentation is applied to isolate the infected leaf region from soil or surrounding vegetation using thresholding or contour-based methods [27][28][33]. This step ensures that the learning model focuses only on disease-related features rather than environmental artifacts.
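    A minimal sketch of this preprocessing chain, assuming a fixed 3x3 Gaussian kernel and a global-mean threshold standing in for the thresholding/contour segmentation described above:

```python
import numpy as np

def preprocess(img):
    """Normalise to [0, 1], smooth with a 3x3 Gaussian kernel, and
    segment the leaf from the background by a global threshold."""
    x = img.astype(np.float32) / 255.0                       # normalisation
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 16.0
    pad = np.pad(x, 1, mode="edge")                          # replicate borders
    smooth = sum(k[i, j] * pad[i:i + x.shape[0], j:j + x.shape[1]]
                 for i in range(3) for j in range(3))        # Gaussian blur
    mask = smooth > smooth.mean()                            # crude segmentation
    return smooth, mask

# Synthetic grayscale frame: a bright "leaf" patch on a dark background.
img = np.zeros((8, 8), np.uint8)
img[2:6, 2:6] = 200
smooth, mask = preprocess(img)
print(mask[3, 3], mask[0, 0])  # True False
```

In a real pipeline, Otsu thresholding or contour extraction would replace the global-mean cut, but the structure of the stage is the same.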

    After preprocessing, data augmentation techniques are applied to increase dataset diversity and prevent overfitting. Augmentation methods such as rotation, horizontal and vertical flipping, scaling, zooming, and brightness adjustments simulate real-field variability [24][31][39]. These transformations help the model generalize better when exposed to new unseen field conditions. The augmented dataset is then divided into training, validation, and testing sets, typically following an 80:10:10 or 70:15:15 ratio to ensure unbiased evaluation.
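    The 80:10:10 partitioning step can be sketched as a shuffled index split (the seed and ratios here are illustrative):

```python
import numpy as np

def split_indices(n, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle sample indices and cut them into train/val/test partitions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(1000)
print(len(train), len(val), len(test))  # 800 100 100
```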

    For the classification module, the system employs a deep learning architecture based on transfer learning. Pretrained Convolutional Neural Network models such as ResNet, VGG16, and MobileNet are fine-tuned using sugarcane disease images [7][36][38]. Transfer learning is preferred because sugarcane datasets are relatively smaller compared to general image datasets. The final fully connected layers of the pretrained model are modified to match the number of sugarcane disease classes, such as red rot, smut, rust, mosaic, and healthy leaf categories.
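    The head-replacement idea behind this transfer-learning setup can be illustrated framework-agnostically. In the sketch below, a fixed random projection stands in for the frozen pretrained backbone (an assumption for illustration only; a real system would use ResNet/VGG16/MobileNet convolutional features), and only a new softmax layer sized to the five disease classes is trained:

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["red_rot", "smut", "rust", "mosaic", "healthy"]

# Stand-in for a frozen pretrained backbone (hypothetical): a fixed random
# projection maps a flattened 32x32 image to 128-d ReLU features.
backbone = rng.normal(scale=0.01, size=(32 * 32, 128))
def features(x):
    return np.maximum(x.reshape(len(x), -1) @ backbone, 0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def xent(p, onehot):
    return -np.mean(np.sum(onehot * np.log(p + 1e-12), axis=1))

# Toy labelled batch; in practice these are annotated sugarcane leaf images.
x = rng.normal(size=(50, 32, 32))
y = rng.integers(0, len(CLASSES), size=50)
onehot = np.eye(len(CLASSES))[y]

# Only the new classification head W is trained; the backbone stays frozen.
W = np.zeros((backbone.shape[1], len(CLASSES)))
f = features(x)
loss0 = xent(softmax(f @ W), onehot)
for _ in range(300):                        # plain gradient descent on the head
    p = softmax(f @ W)
    W -= 0.1 * f.T @ (p - onehot) / len(x)  # cross-entropy gradient
loss1 = xent(softmax(f @ W), onehot)
print(loss1 < loss0)  # True
```

Fine-tuning in a deep learning framework follows the same pattern: freeze the convolutional layers, replace the final fully connected layers, and optimize only the new head (optionally unfreezing upper layers later).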

    During training, categorical cross-entropy loss is used as the optimization objective, and adaptive optimizers such as Adam are applied to improve convergence speed. Early stopping and dropout layers are incorporated to reduce overfitting. Model performance is evaluated using accuracy, precision, recall, F1-score, and confusion matrix analysis to ensure balanced classification performance across all disease categories [31][34]. Additionally, Receiver Operating Characteristic (ROC) curves may be used for multi-class evaluation where applicable.
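    The evaluation metrics named above follow directly from the confusion matrix; a small NumPy sketch of the per-class computation:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, k):
    """k-by-k matrix: rows are true classes, columns are predictions."""
    m = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

def per_class_metrics(m):
    """Precision, recall, and F1 for each class from a confusion matrix."""
    tp = np.diag(m).astype(float)
    precision = tp / np.maximum(m.sum(axis=0), 1)  # column sums = predicted counts
    recall = tp / np.maximum(m.sum(axis=1), 1)     # row sums = true counts
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
m = confusion_matrix(y_true, y_pred, 3)
p, r, f1 = per_class_metrics(m)
print(r.tolist())  # [0.5, 1.0, 0.5]
```

Reporting these per-class values, rather than overall accuracy alone, is what exposes imbalance between common and rare disease categories.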

    To enhance transparency and interpretability, the proposed system integrates explainable AI techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM). This technique highlights infected regions on the leaf image, allowing users to visually understand which parts of the image contributed to the prediction [32][37]. Interpretability improves farmer trust and supports agricultural experts in validating automated predictions.
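    The core of Grad-CAM is a gradient-weighted sum of the last convolutional layer's feature maps. The sketch below applies that computation to synthetic activations and gradients; in practice both are obtained from the trained CNN via automatic differentiation:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from the last conv layer.
    activations: (K, H, W) feature maps; gradients: (K, H, W) d(score)/dA.
    Channel weights are the spatially averaged gradients; the map is the
    ReLU of the weighted sum, highlighting regions that raise the score."""
    weights = gradients.mean(axis=(1, 2))                  # alpha_k (GAP of grads)
    cam = np.maximum(np.tensordot(weights, activations, axes=1), 0)
    return cam / cam.max() if cam.max() > 0 else cam       # normalise to [0, 1]

# Toy example: one channel fires on a lesion patch and has positive gradient.
act = np.zeros((2, 6, 6)); act[0, 2:4, 2:4] = 1.0
grad = np.zeros((2, 6, 6)); grad[0] = 1.0; grad[1] = -1.0
cam = grad_cam(act, grad)
print(cam[3, 3], cam[0, 0])  # 1.0 0.0
```

The resulting map is upsampled and overlaid on the leaf image so that the highlighted region can be checked against the visible lesion.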

    After classification, the decision-support layer provides actionable recommendations. If a disease is detected, the system suggests appropriate preventive or corrective measures, including recommended pesticides, fungicides, or agronomic practices. This advisory module can be integrated into a web or mobile application to ensure accessibility for farmers in rural areas [18][21]. For large-scale deployment, cloud-based storage may maintain historical disease records, enabling trend analysis and predictive monitoring.

    For scalability, the system architecture supports two deployment modes. In cloud-based deployment, images are uploaded to a server where high-performance GPUs process the classification task. In edge-based deployment, lightweight models such as MobileNet are optimized for mobile devices, enabling offline detection in low-connectivity rural regions [36][39]. This dual deployment strategy ensures both computational efficiency and accessibility.
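    One representative technique behind such lightweight edge models is post-training weight quantization, which toolchains such as TensorFlow Lite automate. The NumPy sketch below shows only the core arithmetic of symmetric int8 quantization (a 4x size reduction for float32 weights), not a complete conversion pipeline:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric int8 quantisation of a weight tensor, as used when
    shrinking a model for on-device (edge) inference."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(size=(256, 128)).astype(np.float32)  # one toy weight matrix
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(q.nbytes < w.nbytes, err < scale)  # True True
```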

    The proposed methodology emphasizes robustness under real-field conditions. Unlike many laboratory-based experiments, the system is designed to handle varying illumination, mixed infections, and natural environmental complexity. Continuous retraining using new field data ensures adaptive learning and long-term reliability [25][26]. Integration with IoT-based environmental sensors can further enhance prediction by correlating disease occurrence with humidity, temperature, and soil conditions [19][20].

    In summary, the proposed system combines image acquisition, preprocessing, transfer learning-based classification, explainable AI, and farmer advisory integration into a comprehensive intelligent framework. By bridging the gap between research accuracy and practical deployment, the methodology aims to deliver a scalable, accurate, and user-friendly sugarcane disease detection solution suitable for precision agriculture environments [1][40].

    Fig. 4.1 Workflow of Sugarcane Disease Detection Model

    Workflow Summary

    The architecture can be broken down into five distinct phases:

    1. Image Acquisition Layer

      • Sources: Smartphone cameras (individual farmers) or UAVs/drones (large-scale plantations).

      • Environment: Real-world field conditions with natural lighting.

    2. Preprocessing & Augmentation

      • Refinement: Gaussian filtering (noise reduction) and contrast enhancement.

      • Segmentation: Isolating the leaf from the background (soil/weeds).

      • Augmentation: Scaling, flipping, and rotation to build a robust dataset and prevent overfitting.

    3. Deep Learning Classification (Core Engine)

      • Transfer Learning: Utilizing pretrained models (ResNet, VGG16, or MobileNet).

      • Optimization: Fine-tuning with categorical cross-entropy loss and the Adam optimizer.

      • Classes: Red Rot, Smut, Rust, Mosaic, and Healthy.

    4. Interpretability & Validation (XAI)

      • Grad-CAM: Generating heatmaps to show the farmer why the AI flagged a specific spot on a leaf.

      • Evaluation: Using a confusion matrix and F1-score to ensure the model isn't just "guessing" based on high-frequency classes.

    5. Deployment & Advisory

      • Dual-Mode Deployment: Cloud for heavy processing and historical trend analysis; Edge/Mobile with lightweight MobileNet models for offline use in rural areas.

      • Actionable Insights: Direct recommendations for pesticides or agronomic practices.

  5. CONCLUSION

    The rapid advancement of Artificial Intelligence in agriculture has opened new opportunities for addressing long-standing challenges in crop health monitoring. Sugarcane, being a highly valuable commercial crop in countries such as India and Brazil, demands efficient disease management strategies to sustain productivity and farmer income. Traditional disease detection methods, largely dependent on manual inspection and expert diagnosis, have proven insufficient in large-scale agricultural environments due to their time-consuming, subjective, and delayed nature. In contrast, AI-driven image analysis provides a scalable and objective alternative capable of detecting diseases at early stages with high precision.

    This review comprehensively analyzed existing research on sugarcane disease detection using image processing and deep learning techniques. It is evident that early image-based approaches relying on handcrafted features and classical machine learning algorithms laid the foundation for automated crop monitoring systems. However, their limitations in handling environmental variability and complex infection patterns restricted their practical applicability. The emergence of deep learning, particularly Convolutional Neural Networks, significantly improved detection accuracy by enabling automatic feature extraction and hierarchical learning. Transfer learning further strengthened performance by leveraging pretrained architectures such as ResNet, VGG16, and MobileNet, especially in scenarios where sugarcane-specific datasets are limited.

    Despite achieving high classification accuracy in experimental environments, several practical challenges remain. Real-field agricultural conditions introduce variability in lighting, background complexity, leaf overlap, and mixed infections, which can reduce model robustness. The lack of standardized and publicly available sugarcane disease datasets also limits benchmarking and cross-study comparison. Additionally, the majority of research focuses primarily on classification accuracy rather than long-term deployment, scalability, and farmer usability. Bridging the gap between laboratory validation and real-world implementation remains a critical priority for future research.

    The proposed AI-driven framework presented in this review emphasizes a holistic system design that integrates image acquisition, preprocessing, data augmentation, deep learning classification, explainable AI techniques, and advisory support modules. By incorporating visualization tools such as activation mapping, the system enhances interpretability and builds user trust. Deployment strategies that combine cloud computing with edge-based lightweight models ensure accessibility even in low-connectivity rural regions. Furthermore, integration with IoT-based environmental monitoring systems and drone-based surveillance can significantly expand the scope of disease detection from individual leaves to large plantation-level health monitoring.

    Looking forward, future research should focus on creating large annotated sugarcane datasets collected under diverse field conditions. Multi-disease classification models capable of identifying disease severity levels and co-infections will improve practical relevance. The integration of hyperspectral imaging, attention mechanisms, and explainable AI frameworks can further enhance diagnostic precision and transparency. Additionally, developing farmer-centric mobile applications with multilingual support and offline functionality will promote widespread adoption among rural communities.

    In conclusion, AI-driven image analysis represents a transformative solution for sugarcane disease management. While technological advancements have already demonstrated strong potential in improving detection accuracy, sustainable implementation requires addressing dataset limitations, environmental variability, model interpretability, and cost-effective deployment. With continuous research, interdisciplinary collaboration, and field-level validation, intelligent disease detection systems can significantly reduce crop losses, improve agricultural productivity, and contribute to the broader vision of precision and sustainable agriculture.

  6. ACKNOWLEDGMENT

    The authors would like to thank the Department of Computer Science & Engineering, Babu Banarasi Das Institute of Technology and Management, Lucknow, for providing the academic support, technical resources, and direction that this project required. The authors also thank the professors and colleagues whose insightful comments, support, and helpful criticism greatly enhanced the calibre and applicability of this research project. Their collaboration was crucial to the successful integration of machine learning methods with practical agricultural applications. The authors would especially like to thank Mr. Chinmay Shukla, Assistant Professor, Department of CSE, BBDITM, for his guidance and assistance during this research.

  7. REFERENCES

  1. S. Mohanty, D. Hughes, and M. Salathé, Using Deep Learning for Image-Based Plant Disease Detection, Frontiers in Plant Science, vol. 7, pp. 1–10, 2016.

  2. P. Revathi and M. Hemalatha, Classification of Cotton Leaf Spot Diseases Using Image Processing, Engineering Applications of Artificial Intelligence, vol. 24, no. 4, pp. 1–8, 2012.

  3. J. G. A. Barbedo, Digital Image Processing Techniques for Detecting, Quantifying and Classifying Plant Diseases, SpringerPlus, vol. 2, no. 1, pp. 1–12, 2013.

  4. R. Pydipati, T. Burks, and W. Lee, Identification of Citrus Disease Using Color Texture Features, Computers and Electronics in Agriculture, vol. 52, no. 1–2, pp. 49–59, 2006.

  5. A. Kamilaris and F. X. Prenafeta-Boldú, Deep Learning in Agriculture: A Survey, Computers and Electronics in Agriculture, vol. 147, pp. 70–90, 2018.

  6. K. P. Ferentinos, Deep Learning Models for Plant Disease Detection, Computers and Electronics in Agriculture, vol. 145, pp. 311–318, 2018.

  7. K. He, X. Zhang, S. Ren, and J. Sun, Deep Residual Learning for Image Recognition, in Proc. IEEE CVPR, 2016.

  8. K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, in Proc. ICLR, 2015.

  9. S. Sladojevic et al., Deep Neural Networks Based Recognition of Plant Diseases, Computational Intelligence and Neuroscience, 2016.

  10. M. Brahimi, K. Boukhalfa, and A. Moussaoui, Deep Learning for Tomato Diseases: Classification and Symptoms Visualization, Applied Artificial Intelligence, vol. 31, no. 4, 2017.

  11. FAO, Sugarcane Production Statistics, Food and Agriculture Organization Report, 2022.

  12. R. Singh and P. Singh, A Review of Sugarcane Diseases and Their Impact on Yield, International Journal of Plant Pathology, vol. 9, no. 2, 2021.

  13. D. Sharma et al., Major Fungal Diseases of Sugarcane: A Review, Journal of Crop Protection, 2020.

  14. M. Kumar et al., Detection of Red Rot Disease in Sugarcane Using Image Processing, IEEE Access, 2021.

  15. S. Patil and V. Bodhe, Leaf Disease Severity Measurement Using Image Processing, International Journal of Engineering and Technology, 2011.

  16. A. P. Alves et al., Sugarcane Smut Detection Using Computer Vision, Biosystems Engineering, 2019.

  17. World Bank, Digital Agriculture Transformation Report, 2021.

  18. A. Kamilaris et al., IoT in Agriculture: A Review, Sensors, vol. 18, no. 8, 2018.

  19. L. Li, Q. Zhang, and D. Huang, A Review of Imaging Techniques for Plant Phenotyping, Sensors, 2014.

  20. Y. Zhang and L. Kovacs, The Application of UAVs in Agriculture, Precision Agriculture, 2012.

  21. J. Too et al., Comparative Study of Fine-Tuning Deep Learning Models for Plant Disease Identification, Computers and Electronics in Agriculture, 2019.

  22. S. Hasan et al., Deep Learning-Based Plant Disease Detection: A Review, IEEE Access, 2020.

  23. J. Yosinski et al., How Transferable Are Features in Deep Neural Networks? in Proc. NIPS, 2014.


  24. C. Shorten and T. Khoshgoftaar, A Survey on Image Data Augmentation for Deep Learning, Journal of Big Data, vol. 6, 2019.

  25. R. R. Selvaraj et al., Challenges in Real-Field Crop Disease Detection, Agricultural Systems, 2022.

  26. M. Rahman et al., Smart Farming Using AI and IoT, IEEE Internet of Things Journal, 2021.

  27. R. Gonzalez and R. Woods, Digital Image Processing, 4th ed., Pearson, 2018.

  28. R. Haralick et al., Textural Features for Image Classification, IEEE Transactions on Systems, Man, and Cybernetics, 1973.

  29. C. Cortes and V. Vapnik, Support-Vector Networks, Machine Learning, 1995.

  30. T. Cover and P. Hart, Nearest Neighbor Pattern Classification, IEEE Transactions on Information Theory, 1967.

  31. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, 2016.

  32. R. R. Selvaraju et al., Grad-CAM: Visual Explanations from Deep Networks, in Proc. IEEE ICCV, 2017.

  33. J. Deng et al., ImageNet: A Large-Scale Hierarchical Image Database, in Proc. IEEE CVPR, 2009.

  34. D. Chicco and G. Jurman, The Advantages of the Matthews Correlation Coefficient, BMC Genomics, 2020.

  35. S. Thapa et al., Deep Learning for Plant Disease Detection: A Review, Plant Methods, 2020.

  36. A. Howard et al., MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, 2017.

  37. M. T. Ribeiro et al., Why Should I Trust You? Explaining the Predictions of Any Classifier, in Proc. ACM KDD, 2016.

  38. C. Szegedy et al., Rethinking the Inception Architecture for Computer Vision, in Proc. IEEE CVPR, 2016.

  39. M. Tan and Q. Le, EfficientNet: Rethinking Model Scaling for CNNs, in Proc. ICML, 2019.

  40. F. Chollet, Xception: Deep Learning with Depthwise Separable Convolutions, in Proc. IEEE CVPR, 2017.