DOI : https://doi.org/10.5281/zenodo.20026013
- Open Access

- Authors : Sushanta Kumar Mohanty, Abha Mahalwar, Sidhartha Sankar Dora, Chandrakant Mallick
- Paper ID : IJERTV15IS043789
- Volume & Issue : Volume 15, Issue 04, April 2026
- Published (First Online): 04-05-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
UAV-Based Crop Disease Detection Using Hybrid AI and IoT Integration
Sushanta Kumar Mohanty, Abha Mahalwar
Department of Computer Science, ISBM University, Chhattisgarh, India
Sidhartha Sankar Dora
School of Engineering and Technology, DRIEMS University, India
Chandrakant Mallick
Department of Computer Science & Engineering, GITA, Bhubaneswar, India
Abstract – Timely identification of crop diseases is essential for improving agricultural productivity and ensuring food security. This paper presents an intelligent crop disease detection framework that integrates Unmanned Aerial Vehicles (UAVs), Internet of Things (IoT) sensors, and hybrid deep learning techniques. UAVs are employed to capture high-resolution aerial imagery, while IoT devices collect environmental parameters such as temperature, humidity, and soil moisture. A hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model is developed to learn both spatial and temporal patterns from multimodal data. The proposed approach achieves an accuracy of 97.2%, outperforming conventional machine learning and standalone deep learning models. Furthermore, the system incorporates energy-efficient UAV operation and explainable AI methods to enhance interpretability. Experimental evaluation demonstrates that the proposed framework is reliable, scalable, and suitable for real-world precision agriculture applications.
Keywords – UAV, Crop Disease Detection, Deep Learning, CNN-LSTM, IoT, Precision Agriculture, Edge Computing
-
INTRODUCTION
Agriculture remains a cornerstone of global economic stability and food supply [1]. However, crop diseases pose a significant threat to yield and quality [2]. Traditional disease detection methods rely on manual inspection, which is often inefficient, labour-intensive, and prone to inaccuracies [3].
Recent technological advancements have introduced UAVs as effective tools for large-scale agricultural monitoring. UAVs provide detailed aerial imagery, enabling farmers to detect anomalies in crop health [4]. Despite these advancements, most existing systems rely solely on image-based analysis, which does not fully capture environmental factors influencing plant diseases.
To address these limitations, this study proposes a comprehensive system that integrates UAV imaging with IoT sensor data and advanced deep learning models. The objective is to develop a robust, accurate, and real-time disease detection framework.
-
LITERATURE REVIEW
UAV-based crop monitoring has gained considerable attention in recent years. Deep learning models, particularly CNNs, have shown promising results in identifying plant diseases from leaf images [5]. However, these models often require large datasets and high computational resources. Several studies have explored multispectral imaging to detect early-stage diseases, but such approaches increase system cost and complexity. Transformer-based architectures have demonstrated superior performance, yet their deployment in real-time UAV systems remains challenging due to computational demands. The existing research highlights the need for a balanced approach that combines accuracy, efficiency, and scalability. This section reviews key contributions from existing literature.
-
UAV-Based Crop Disease Detection
Recent advancements in precision agriculture have significantly leveraged Unmanned Aerial Vehicles (UAVs) combined with artificial intelligence for efficient crop disease detection [6].
-
UAV-Based Remote Sensing for Crop Monitoring
Early studies emphasize the importance of UAVs in agricultural monitoring due to their ability to capture high-resolution spatial, temporal, and spectral data [7]. UAVs equipped with RGB, multispectral, and hyperspectral sensors provide detailed crop information, enabling early disease identification and management. Research highlights that UAV-based remote sensing outperforms traditional manual inspection methods, which are time-consuming and less scalable for large farms.
-
Machine Learning Approaches for Disease Detection
Early approaches concentrated on classical machine learning methods such as Support Vector Machines (SVM), Random Forest, and K-Nearest Neighbors, which rely on handcrafted features [8]. However, their performance is limited when dealing with complex disease patterns and large-scale datasets.
-
Deep Learning-Based UAV Image Analysis
Recent studies have shifted toward deep learning due to its superior feature extraction capability. Convolutional Neural Networks (CNNs), including architectures such as ResNet and VGG, have been widely used for classification and segmentation of crop diseases from UAV imagery [9]. A comprehensive review by Shahi et al. highlights that deep learning significantly improves detection accuracy compared to traditional methods, especially when combined with high-quality UAV data [10]. Similarly, Zhu et al. (2024) demonstrated that deep learning models can effectively detect both diseases and pests using UAV-based remote sensing images, enabling intelligent agricultural systems [11].
-
Multispectral and Hyperspectral Imaging
Several studies emphasize the importance of multispectral and hyperspectral imaging for early disease detection. These sensors capture information beyond visible light, allowing detection of physiological changes in plants before symptoms become visible. Hyperspectral imaging combined with deep learning has shown promising results in identifying subtle disease patterns and stress conditions in crops [12].
-
Integration of UAV with AI and IoT
Recent research trends focus on integrating UAV systems with IoT and AI to enhance decision-making. UAVs collect aerial data, while ground-based sensors provide environmental parameters such as humidity, temperature, and soil moisture. This integration improves disease prediction accuracy and enables real-time monitoring in smart agriculture systems.
-
Challenges Identified in Literature
Despite significant progress, several challenges remain:
- Limited availability of labelled UAV datasets
- Variability in lighting and environmental conditions
- High computational requirements of deep learning models
- Difficulty in real-time processing
- Generalization issues across different crops and regions
These challenges highlight the need for more robust, scalable, and energy-efficient systems.
Table 1: Comparison of existing UAV-Based Crop Disease Detection methods
| Reference | Year | Data Source | Method Used | Model Type | Accuracy / Performance | Key Contribution | Limitation |
|---|---|---|---|---|---|---|---|
| Shahi et al. [10] | 2023 | UAV datasets | Deep learning (CNN, transfer learning) | CNN-based | High accuracy (improved over ML) | Demonstrates DL superiority over traditional ML | Requires large labelled datasets |
| Zhu et al. [11] | 2024 | UAV remote sensing images | Deep learning for disease & pest detection | CNN / Hybrid | High detection efficiency | Integration of UAV + AI for intelligent agriculture | Computational complexity |
| Dhuli et al. [14] | 2023 | Real-time UAV wheat images | CNN + multispectral data | CNN | High accuracy + 50% less computation | Real-time disease detection system | Limited to wheat crop |
| Zhang et al. [15] | 2025 | UAV crop images | CNN + State Space Model (VSS) | Hybrid (CNN + SSM) | Improved segmentation accuracy | Captures both local & global features efficiently | High model complexity |
| Bouguettaya et al. [16] | 2022 | UAV imagery | Deep learning survey | CNN-based | Review | Overview of DL techniques in UAV agriculture | Lacks real-time implementation |
| Prasad et al. [17] | 2021 | UAV + synthetic data | GAN + ML classifier | Hybrid | ~96.3% accuracy | Handles data imbalance using GAN | Lower confidence in low-res UAV images |
| Vardhan et al. [18] | 2023 | Drone images | CNN classification | CNN | High classification accuracy | Real-time disease detection using UAV | Sensitive to image quality |
| Reedha et al. [19] | 2021 | UAV high-res images | Vision Transformer (ViT) | Transformer | ~99.8% accuracy | Captures global dependencies better than CNN | Requires high computational power |
-
Evolution of Methods
The development of crop disease detection techniques has progressed significantly over time, moving from traditional machine learning approaches to advanced deep learning and hybrid architectures. This evolution can be broadly categorized into three stages:
-
Early Stage: Machine Learning-Based Approaches
In the initial phase, researchers relied on classical machine learning algorithms such as Support Vector Machines (SVM) and Random Forest (RF) for disease classification.
-
Mid Stage: CNN-Based Deep Learning Approaches
With the advancement of deep learning, Convolutional Neural Networks (CNNs) became the dominant approach for plant disease detection.
- CNNs automatically learn hierarchical features from raw images, eliminating the need for manual feature extraction.
- Popular architectures include VGG, ResNet, DenseNet, EfficientNet, and MobileNet.
-
Recent Stage: Hybrid Models
Recent research has shifted toward hybrid architectures that combine CNNs with Transformers or State Space Models (SSMs) to overcome the limitations of individual models.
- CNNs are used for local feature extraction, while Transformers or SSMs capture global contextual relationships.
- These models provide a more comprehensive understanding of disease patterns across leaf surfaces.
-
Performance Trend
Convolutional Neural Network (CNN) models have demonstrated high accuracy in crop disease detection due to their strong capability in extracting spatial and hierarchical features from images. However, their limitation in capturing long-range dependencies has led to the development of hybrid models that combine CNNs with advanced architectures such as Transformers or State Space Models. These hybrid models provide better generalization by integrating local feature extraction with global contextual understanding, making them more robust in complex and real-world agricultural environments. On the other hand, transformer-based models achieve the highest accuracy by effectively modelling global relationships within image data through self-attention mechanisms. Despite their superior performance, these models are computationally expensive and require large datasets and high processing power, which limits their practical deployment in resource-constrained environments.
-
Research Gaps Identified
Despite significant advancements in deep learning and precision agriculture, several critical challenges remain in crop disease detection systems. One major limitation is the lack of real-time integration between UAVs and edge AI systems, which restricts immediate decision-making in field conditions. Additionally, the availability of diverse and large-scale multi-crop datasets is limited, affecting the generalization capability of models across different plant species and environmental conditions. Advanced deep learning models, particularly hybrid and transformer-based architectures, also suffer from high computational costs, making them unsuitable for deployment on resource-constrained devices such as drones and mobile platforms. Furthermore, most deep learning models operate as black-box systems, leading to poor explainability and reduced trust among end-users, especially farmers who require interpretable insights for practical decision-making.
-
Research Gap and Motivation
Although prior work has contributed significantly to crop disease detection, several issues remain unresolved:
- Limited capability for real-time processing
- Lack of integration between image data and environmental factors
- High computational requirements for advanced models
- Insufficient focus on energy-efficient UAV operations
- Minimal use of explainable AI techniques
This research aims to overcome these challenges by proposing a hybrid and integrated solution.
-
PROPOSED SYSTEM ARCHITECTURE
The proposed system consists of multiple interconnected components, including a UAV equipped with imaging sensors for capturing crop images, IoT devices for monitoring environmental parameters, and an edge/cloud computing infrastructure for data processing. A hybrid deep learning model is employed for accurate disease detection, while a decision support module provides actionable insights to assist farmers in timely and effective crop management. The system workflow includes data acquisition, pre-processing, feature extraction, data fusion, disease detection, and result visualization.
-
METHODOLOGY
-
Proposed System Overview
The proposed system integrates UAV-based remote sensing, IoT-enabled environmental monitoring, and hybrid deep learning models to detect crop diseases in real time. The workflow consists of data acquisition, pre-processing, feature extraction, multimodal data fusion, model training, and decision support. The system architecture is presented in fig. 1.
Figure 1: Proposed system architecture for UAV-based crop disease detection
-
Data Acquisition Layer
The first stage involves collecting high-resolution images of crop fields using an Unmanned Aerial Vehicle (UAV) equipped with advanced imaging sensors. The UAV enables large-area coverage and captures detailed visual information of plant leaves. Simultaneously, IoT devices deployed in the field collect environmental parameters such as temperature, humidity, and soil moisture, which are essential for understanding disease patterns.
-
Data Transmission
The collected UAV and IoT data are transmitted to edge/cloud servers through:
- LoRa/LoRaWAN (sensor data)
- 4G/5G or Wi-Fi (UAV imagery)
-
Data Pre-processing
The captured images are pre-processed to improve quality and consistency. This includes resizing images to a standard resolution, normalization to scale pixel values, and noise reduction to eliminate distortions. Data augmentation techniques such as rotation, flipping, and scaling are applied to increase dataset diversity and improve model robustness.
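The normalization and geometric augmentation steps described above can be sketched with NumPy. This is a minimal illustration under assumptions: the tiny single-channel array and the specific flips/rotations chosen are not the authors' exact pipeline.

```python
import numpy as np

def normalize(img):
    # Scale 8-bit pixel values to the [0, 1] range.
    return img.astype(np.float32) / 255.0

def augment(img):
    # Generate simple geometric variants: the original image,
    # horizontal/vertical flips, and 90/270-degree rotations.
    return [
        img,
        np.flip(img, axis=1),  # horizontal flip
        np.flip(img, axis=0),  # vertical flip
        np.rot90(img, k=1),    # rotate 90 degrees
        np.rot90(img, k=3),    # rotate 270 degrees
    ]

# Example: one 4x4 single-channel "image" yields five training variants.
image = np.arange(16, dtype=np.uint8).reshape(4, 4)
variants = [normalize(a) for a in augment(image)]
```

In practice such transforms would be applied on the fly during training to increase dataset diversity, as the text describes.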
-
Image Pre-processing
The UAV images undergo the following steps:
- Noise removal (Gaussian filtering)
- Radiometric correction
- Image stitching (orthomosaic generation)
- Contrast enhancement
-
Data Cleaning
IoT sensor data is filtered to remove missing or inconsistent values using interpolation and normalization techniques.
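The interpolation step for missing sensor readings can be illustrated with `np.interp`; the temperature trace below is a made-up example, not field data.

```python
import numpy as np

def clean_series(values):
    # Replace missing readings (NaN) by linear interpolation
    # between the nearest valid neighbouring samples.
    v = np.asarray(values, dtype=float)
    idx = np.arange(len(v))
    valid = ~np.isnan(v)
    v[~valid] = np.interp(idx[~valid], idx[valid], v[valid])
    return v

# Example: a temperature trace with two dropped samples.
raw = [24.0, np.nan, 26.0, 27.0, np.nan, 29.0]
cleaned = clean_series(raw)
print(cleaned)  # [24. 25. 26. 27. 28. 29.]
```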
-
Feature Extraction
A Convolutional Neural Network (CNN) is employed to extract local spatial features from the pre-processed images. CNN layers capture important patterns such as texture, colour variations, and disease spots, which are crucial for accurate classification.
-
Global Context Modelling
To enhance feature representation, a Transformer-based module is integrated with the CNN. The transformer captures long-range dependencies and global contextual relationships within the image, enabling better understanding of complex disease patterns distributed across leaf surfaces.
-
Data Fusion
The system incorporates a data fusion mechanism that combines image-based features with environmental data collected from IoT sensors. This multimodal approach improves prediction accuracy by considering both visual symptoms and environmental conditions influencing disease occurrence.
-
Disease Detection
The fused features are passed through fully connected layers for classification. The model predicts the type of crop disease or identifies healthy plants using a softmax activation function. The classification process is optimized using appropriate loss functions and training strategies.
-
Decision Support System
The final stage involves generating actionable insights based on model predictions. The decision support module provides real-time alerts and recommendations to farmers through mobile or web-based interfaces, enabling timely intervention and effective crop management.
-
Performance Evaluation
The model performance is evaluated using:
- Accuracy
- Precision
- Recall
- F1-score
- Confusion matrix
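These metrics follow directly from confusion-matrix counts; the counts below are illustrative numbers, not the paper's experimental results.

```python
def metrics(tp, fp, fn, tn):
    # Standard binary-classification metrics from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts: 90 TP, 5 FP, 10 FN, 95 TN.
acc, prec, rec, f1 = metrics(tp=90, fp=5, fn=10, tn=95)
```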
-
Workflow Summary
The workflow of the proposed system for disease detection using UAV imagery and IoT data integration is presented in fig. 2. The process illustrates sequential stages from data acquisition and preprocessing to feature extraction, data fusion, classification, and decision support for generating alerts and recommendations.
Figure 2: Workflow of the proposed UAV and IoT-based disease detection system
-
MATHEMATICAL MODEL
-
Image Representation
Let the input UAV image be represented as:
X ∈ R^(H×W×C) (1)
where H, W, and C denote the height, width, and number of channels of the image.
-
CNN-Based Feature Extraction
The convolution operation extracts local features:
F = X * K + b (2)
where X is the input image, K the convolution kernel, b the bias term, and F the resulting feature map. A ReLU activation introduces non-linearity:
f(x) = max(0, x) (3)
-
Transformer-Based Global Learning
The self-attention mechanism captures global dependencies:
Attention(Q, K, V) = softmax(QK^T / √d_k) V (4)
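Equations (2)-(4) can be sketched in NumPy as follows. This is an illustrative single-channel, single-head version with toy inputs, not the full model.

```python
import numpy as np

def conv2d(X, K, b=0.0):
    # Eq. (2): valid-mode 2D convolution, F = X * K + b.
    h, w = K.shape
    H, W = X.shape
    F = np.empty((H - h + 1, W - w + 1))
    for i in range(F.shape[0]):
        for j in range(F.shape[1]):
            F[i, j] = np.sum(X[i:i + h, j:j + w] * K) + b
    return F

def relu(x):
    # Eq. (3): f(x) = max(0, x).
    return np.maximum(0.0, x)

def attention(Q, K, V):
    # Eq. (4): scaled dot-product self-attention with a row-wise softmax.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

feature_map = relu(conv2d(np.ones((3, 3)), np.ones((2, 2))))
context = attention(np.eye(2), np.eye(2), np.eye(2))
```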
-
Data Fusion
Let F_i denote the image features and F_e the environmental (IoT) features. Fusion is defined as:
F_fusion = [F_i ⊕ F_e] (5)
where ⊕ denotes concatenation.
-
Classification Layer
The final prediction is obtained using the softmax function:
P(y_i) = e^(z_i) / Σ_j e^(z_j) (6)
where z_i is the output of the final layer for class i and P(y_i) is the probability of class i.
-
Loss Function
The model is optimized using the cross-entropy loss:
L = -Σ_i y_i log(P(y_i)) (7)
where y_i is the one-hot ground-truth label for class i.
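Equations (6) and (7) correspond to the following NumPy sketch; the logits are a made-up example.

```python
import numpy as np

def softmax(z):
    # Eq. (6): convert final-layer outputs into class probabilities.
    # Subtracting the max improves numerical stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Eq. (7): cross-entropy between one-hot labels and predictions.
    return -np.sum(y_true * np.log(y_pred + eps))

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)                                  # sums to 1
loss = cross_entropy(np.array([1.0, 0.0, 0.0]), probs)   # penalty for class 0
```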
-
Final Model Representation
The complete hybrid model can be expressed as:
Y = f_classifier(f_transformer(f_CNN(X)) ⊕ F_e) (8)
-
DATA SOURCE
The proposed system utilizes a multimodal dataset consisting of UAV-acquired imagery and IoT sensor data. The dataset is constructed using both publicly available benchmark datasets and synthetic/field-collected data to ensure robustness and generalization. The data sources are presented in Table 2.
Table 2: Data Sources for Image-Based and IoT-Driven Crop Disease Detection
| Data Source | Type of Data | Description | Purpose |
|---|---|---|---|
| UAV Images | Aerial images | High-resolution images captured using drones over crop fields | Real-time disease detection in field conditions |
| PlantVillage Dataset | Image dataset | Public dataset with labelled images of healthy and diseased leaves | Model training and benchmarking |
| Real-Field Dataset | Image dataset | Images collected under natural conditions (varying light, background, noise) | Improve model generalization |
| IoT Sensors | Environmental data | Temperature, humidity, soil moisture readings | Context-aware disease prediction |
-
Dataset Size and Split
Table 3 presents the size and composition of the datasets used in the study, including UAV images, IoT records, and the combined labelled dataset. It provides an overview of data availability and distribution, which is essential for model training and evaluation.
Table 3: Dataset Size and Distribution of UAV Images, IoT Records, and Combined Samples
| Dataset Type | Size |
|---|---|
| UAV Images | 10,000-20,000 samples |
| IoT Records | 50,000+ entries |
| Combined Dataset | ~15,000 labelled samples |
-
Data Split
The dataset is divided into three subsets to ensure effective model development and evaluation. A total of 70% of the data is allocated for training, allowing the model to learn underlying patterns and features. The remaining data is split equally, with 15% used for validation to fine-tune model parameters and prevent overfitting, and 15% reserved for testing to assess the model's performance on unseen data.
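The 70/15/15 split can be sketched as a shuffled index partition; the seed and the 15,000-sample count (the combined dataset size from Table 3) are used for illustration only.

```python
import numpy as np

def split_indices(n, seed=42):
    # Shuffle all sample indices, then partition them
    # 70% / 15% / 15% into train, validation, and test subsets.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = n * 70 // 100   # integer arithmetic avoids float rounding
    n_val = n * 15 // 100
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train_idx, val_idx, test_idx = split_indices(15000)
```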
-
Data Pre-processing
Data pre-processing is a crucial step to ensure the quality and consistency of the input data for the proposed system. Initially, all UAV-acquired images are resized to a standard resolution and normalized to maintain uniform pixel intensity distribution. To enhance model robustness and prevent overfitting, data augmentation techniques such as rotation, flipping, and scaling are applied. In addition to image processing, the environmental data collected from IoT sensors undergo noise filtering to remove irregular or corrupted readings. Furthermore, missing values in sensor data are handled using interpolation techniques, ensuring continuity and reliability of the dataset for accurate disease prediction.
-
-
SIMULATION ENVIRONMENT AND PARAMETERS
-
Simulation Overview
The proposed system is simulated to evaluate the performance of UAV-enabled crop disease detection using hybrid deep learning and IoT data fusion. The simulation includes UAV data acquisition, communication, AI model training, and performance evaluation.
-
Hardware Configuration
The experiments were conducted on a system equipped with an Intel Core i7 processor, 16 GB RAM, and an NVIDIA GPU (e.g., RTX 3060) to accelerate deep learning computations. The UAV system is assumed to be equipped with a high-resolution RGB camera for image acquisition, while IoT devices include sensors for temperature, humidity, and soil moisture monitoring.
-
Software Environment
The proposed model was implemented using Python with deep learning frameworks such as TensorFlow and Keras (or PyTorch). Image processing and data handling were performed using libraries including OpenCV, NumPy, and Pandas. The experiments were conducted in a Jupyter Notebook / Google Colab environment.
-
Model Configuration
The proposed CNN-LSTM model is designed to effectively capture both spatial and temporal features for accurate classification. It incorporates convolutional layers for feature extraction, enabling the model to learn important patterns from the input data, followed by pooling layers for dimensionality reduction, which help in minimizing computational complexity while retaining essential information. To capture sequential dependencies and contextual relationships, LSTM layers are employed for sequence and context learning. Finally, fully connected layers are used for classification, producing the final output based on the learned features. The model was trained using the Adam optimizer with a learning rate of 0.001, and categorical cross-entropy was used as the loss function.
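A minimal Keras sketch of this CNN-LSTM configuration is shown below. The layer widths, the five-frame sequence length, and the four-class output are illustrative assumptions, not the authors' exact architecture; only the optimizer, learning rate, and loss function are taken from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    # Convolutional feature extraction applied to each frame in a sequence.
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu"),
                           input_shape=(5, 224, 224, 3)),
    # Pooling for dimensionality reduction.
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Flatten()),
    # LSTM captures sequential and contextual dependencies across frames.
    layers.LSTM(64),
    # Fully connected classifier head.
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),  # assumed 4 disease classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
```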
-
UAV Simulation Parameters
Table 4 summarizes the key UAV simulation parameters, including flight conditions, imaging specifications, and operational limits. It highlights typical ranges for altitude, speed, coverage area, camera resolution, image overlap, battery capacity, and flight duration, providing an overview of the system's performance and operational setup.
Table 4: UAV Simulation Parameters for Data Acquisition and Flight Operation
| Parameter | Value |
|---|---|
| Flight Altitude | 50-120 m |
| Flight Speed | 5-10 m/s |
| Coverage Area | 100 × 100 m² to 500 × 500 m² |
| Camera Resolution | 12 MP / 20 MP |
| Image Overlap | 70% (front), 60% (side) |
| Battery Capacity | 4000-6000 mAh |
| Flight Time | 20-30 minutes |
Figure 3: Aerial View
-
IoT Sensor Parameters
Table 5 outlines the key IoT sensor parameters used for environmental monitoring in the system. It specifies the typical operating ranges for temperature, humidity, soil moisture, and soil pH, along with the sampling interval of 5-10 minutes, ensuring timely and accurate data collection for context-aware analysis and disease prediction.
Table 5: IoT Sensor Parameters and Measurement Ranges
| Parameter | Range |
|---|---|
| Temperature | 20°C to 40°C |
| Humidity | 40% to 90% |
| Soil Moisture | 10% to 60% |
| Soil pH | 5.5 to 8.0 |
| Sampling Interval | 5-10 minutes |
-
Deep Learning Model Parameters
Table 6 presents the key parameters used in configuring the deep learning models for the study. It includes details such as the model type (CNN combined with LSTM or Transformer), input image sizes, batch size, number of training epochs, and learning rate. Additionally, it specifies the use of the Adam optimizer, categorical cross-entropy as the loss function, and ReLU and Softmax as activation functions, outlining the overall training setup and model architecture choices.
Table 6: Deep Learning Model Configuration and Training Parameters
| Parameter | Value |
|---|---|
| Model Type | CNN + LSTM / Transformer |
| Input Image Size | 224 × 224 / 256 × 256 |
| Batch Size | 16 / 32 |
| Epochs | 50-100 |
| Learning Rate | 0.001 |
| Optimizer | Adam |
| Loss Function | Categorical Cross-Entropy |
| Activation Function | ReLU, Softmax |
-
RESULTS AND DISCUSSION
Table 7 presents a comparative evaluation of different models based on key performance metrics, including accuracy, precision, recall, and F1-score. It shows that while individual models like CNN and Transformer achieve strong results, the hybrid approaches perform better, with the proposed CNN-LSTM model delivering the highest performance across all metrics. This demonstrates the effectiveness of combining spatial and sequential learning techniques for improved disease detection accuracy.
Table 7: Model Comparison
| Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
|---|---|---|---|---|
| CNN | 91.8 | 90.5 | 89.7 | 90.1 |
| Transformer | 93.2 | 92.4 | 91.8 | 92.1 |
| Hybrid (CNN + Transformer) | 96.1 | 95.3 | 94.8 | 95.0 |
| CNN-LSTM (Proposed) | 97.2 | 96.6 | 96.0 | 96.3 |
-
Receiver Operating Characteristic (ROC) Curve
The ROC curve shown in fig. 4 illustrates the classification performance of the proposed UAV-enabled CNN-LSTM model that integrates multimodal data through image and IoT sensor fusion. In this context, the curve plots the true positive rate (TPR), or sensitivity, against the false positive rate (FPR) across different decision thresholds. A curve that closely approaches the top-left corner indicates that the model achieves a high TPR while maintaining a very low FPR, meaning it can correctly identify diseased crops with minimal false alarms.
Figure. 4: ROC Curve
This is particularly important in UAV-based agricultural monitoring, where unnecessary interventions due to false positives can increase operational costs, while missed detections (false negatives) can lead to significant crop damage. The Area Under the Curve (AUC) value of approximately 0.98 signifies excellent discriminative ability, implying that the model can almost perfectly distinguish between healthy and diseased samples. Such a high AUC reflects not only strong predictive accuracy but also robustness across varying thresholds, making the model reliable under different field conditions. Overall, this performance highlights the effectiveness of combining spatial feature extraction (CNN), temporal/contextual learning (LSTM), and multimodal data fusion, positioning the system as a highly dependable solution for real-time, large-scale crop disease detection using UAV platforms.
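The threshold sweep and AUC computation described above can be sketched as follows, using a toy set of labels and scores rather than the paper's data.

```python
import numpy as np

def roc_auc(y_true, scores):
    # Sweep decision thresholds over the predicted scores, collect
    # (FPR, TPR) points, then integrate the curve with the trapezoidal rule.
    thresholds = np.sort(np.unique(scores))[::-1]
    P = np.sum(y_true == 1)
    N = np.sum(y_true == 0)
    tpr, fpr = [0.0], [0.0]
    for t in thresholds:
        pred = scores >= t
        tpr.append(np.sum(pred & (y_true == 1)) / P)
        fpr.append(np.sum(pred & (y_true == 0)) / N)
    tpr, fpr = np.array(tpr), np.array(fpr)
    return np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)

# Toy example: six samples with ground-truth labels and model scores.
y = np.array([1, 1, 0, 1, 0, 0])
s = np.array([0.9, 0.8, 0.7, 0.6, 0.3, 0.1])
auc = roc_auc(y, s)
```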
-
Discussion
-
Model Performance
The proposed hybrid CNN-LSTM model achieves high accuracy (97.2%), outperforming traditional machine learning models and standalone CNN architectures. While transformer-based models can rival this accuracy in some scenarios, they require significantly higher computational resources.
-
Effect of Data Fusion
Integrating IoT sensor data with UAV imagery enhances model robustness, especially under varying environmental conditions such as humidity and temperature.
-
Real-Time Capability
The optimized lightweight model enables near real-time disease detection on edge devices, making it suitable for UAV deployment.
-
Energy Efficiency
The integration of optimization techniques such as Particle Swarm Optimization (PSO) reduces UAV energy consumption, extending operational lifetime and coverage.
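As an illustration of how PSO could tune UAV operating parameters, the sketch below minimizes a hypothetical quadratic energy proxy; the cost function, operating point, and coefficients are assumptions for demonstration, not the paper's actual formulation.

```python
import numpy as np

def pso(cost, dim=2, n_particles=20, iters=100, seed=0):
    # Canonical PSO: each particle tracks its personal best, the swarm
    # shares a global best, and velocities blend inertia with cognitive
    # (personal) and social (global) attraction terms.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Hypothetical energy proxy: quadratic penalty around a nominal
# operating point (e.g. rescaled speed and altitude settings).
energy = lambda p: (p[0] - 0.7) ** 2 + (p[1] - 0.8) ** 2
best, best_val = pso(energy)
```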
-
Limitations
- Slight confusion between visually similar diseases
- Performance depends on image quality and resolution
- Transformer models still outperform in highly complex scenarios
-
Comparison with Existing Work
Table 8 provides a comparative overview of existing studies alongside the proposed system, highlighting differences in methods, dataset types, accuracy, and key limitations. It shows that while previous approaches achieve high accuracy, they often face challenges such as high computational cost, lack of real-time capability, limited crop diversity, or absence of multimodal data integration. In contrast, the proposed system combines CNN, LSTM, and IoT-based data fusion to achieve competitive accuracy while addressing many of these limitations, offering a more balanced and practical solution.
Table 8: Comparison of Proposed System with Existing UAV-Based Crop Disease Detection Approaches
| Study | Method | Dataset Type | Accuracy | Key Limitation |
|---|---|---|---|---|
| Chin et al., 2023 [20] | UAV + ML/DL review | UAV field images (survey) | N/A | No real-time system |
| Shahi et al., 2023 [10] | CNN, DL models | UAV + remote sensing | >90% | Requires large datasets |
| Bouguettaya et al., 2023 [16] | CNN + multispectral | UAV vineyard images | 95.92% | Limited crop diversity |
| Prasad et al., 2021 [17] | GAN + ML | UAV + synthetic data | 96.3% | Lower confidence in low-res UAV images |
| Radočaj et al., 2025 [21] | CNN / Hybrid / Transformer | UAV datasets | 91-99% | High computation cost |
| Dutta et al., 2025 [22] | ResNet + YOLO | UAV real-time system | 98.9% | Limited multimodal fusion |
| Proposed System (2026) | CNN + LSTM + IoT fusion | UAV + IoT multimodal | 97.2% | Slight complexity |
-
CONCLUSION AND FUTURE RESEARCH DIRECTIONS
This study presents a robust UAV-based crop disease detection system that integrates AI and IoT technologies. By combining spatial and temporal data through a hybrid CNN-LSTM model, the system achieves high accuracy and reliability. The proposed framework addresses key limitations of existing approaches and offers a practical solution for precision agriculture.
Future research in UAV-enabled crop disease detection can focus on several promising directions to enhance system capability and scalability. One important area is the integration of satellite imagery with UAV data to enable multi-scale monitoring of agricultural fields. Additionally, the development of lightweight transformer models can address computational challenges and facilitate deployment on edge devices and UAV platforms. Another key direction is the design of autonomous UAV-based treatment systems that not only detect diseases but also perform targeted interventions such as pesticide spraying. Furthermore, expanding the system for deployment in large-scale agricultural environments will improve its practical applicability, ensuring robust performance across diverse crop types and field conditions.
-
REFERENCES
-
Borah, A., Sahu, S., Srivastava, R. P., Singh, M., & Tyagi, D. B. (2024). Exploring the Economic Challenges Threatening Global Agriculture and Food Security. Ecology, Environment & Conservation (0971765X), 30.
-
Gai, Y., & Wang, H. (2024). Plant disease: A growing threat to global food security. Agronomy, 14(8), 1615.
-
Mohanty, S. K., Mahalwar, A., Mallick, C., & Dora, S. S. (2026). HRT-AD Net: Hybrid Residual Transformer Attention-based Deep Network for Rice Disease Diagnosis. AIJFR-Advanced International Journal for Research, 7(2).
-
Narayanappa, G. B. C., Abbas, S. H., Annamalai, L., Meenakshi, R., Singh, M., Yadav, T. N., & Kumar, A. R. (2024). Revolutionizing UAV: experimental evaluation of IoT-enabled unmanned aerial vehicle-based agricultural field monitoring using remote sensing strategy. Remote Sensing in Earth Systems Sciences, 7(4), 411-425.
-
Mallick, C., Patra, A., Dash, S., Mishra, P. K., & Paikaray, B. K. (2026). Leveraging deep learning for strategic decision-making in sustainable agriculture: enhancing plant disease detection for optimised supply chain management and ecosystem health. International Journal of Applied Management Science, 18(1), 90-110.
-
Sharma, K., & Shivandu, S. K. (2024). Integrating artificial intelligence and Internet of Things (IoT) for enhanced crop monitoring and management in precision agriculture. Sensors International, 5, 100292.
-
Maes, W. H., & Steppe, K. (2019). Perspectives for remote sensing with unmanned aerial vehicles in precision agriculture. Trends in plant science, 24(2), 152-164.
-
Mallick, C., Mishra, S., & Senapati, M. R. (2023). A cooperative deep learning model for fake news detection in online social networks. Journal of Ambient Intelligence and Humanized Computing, 14(4), 4451-4460.
-
Bouguettaya, A., Zarzour, H., Kechida, A., & Taberkit, A. M. (2022). Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural computing and applications, 34(12), 9511-9536.
-
Shahi, T. B., Xu, C. Y., Neupane, A., & Guo, W. (2023). Recent advances in crop disease detection using UAV and deep learning techniques. Remote Sensing, 15(9), 2450.
-
Zhu, H., Lin, C., Liu, G., Wang, D., Qin, S., Li, A., … & He, Y. (2024). Intelligent agriculture: Deep learning in UAV-based remote sensing imagery for crop diseases and pests detection. Frontiers in Plant Science, 15, 1435016.
-
García-Vera, Y. E., Polochè-Arango, A., Mendivelso-Fajardo, C. A., & Gutiérrez-Bernal, F. J. (2024). Hyperspectral image analysis and machine learning techniques for crop disease detection and identification: A review. Sustainability, 16(14), 6064.
-
Mansoor, S., Iqbal, S., Popescu, S. M., Kim, S. L., Chung, Y. S., & Baek, J. H. (2025). Integration of smart sensors and IOT in precision agriculture: trends, challenges and future prospectives. Frontiers in Plant Science, 16, 1587869.
-
Surusomayajula, M., Dhuli, V. S., Dodda, V. C., Chalasani, K., Nimmagadda, S. V., & Jammula, T. M. (2023, December). Wheat plant disease detection using CNN on real-time UAV images. In 2023 IEEE 15th International Conference on Computational Intelligence and Communication Networks (CICN) (pp. 603-607). IEEE.
-
Zhang, T., Wang, D., & Chen, W. (2025). Multiscale CNN-state space model with feature fusion for crop disease detection from UAV imagery. Frontiers in Plant Science, 16, 1733727.
-
Bouguettaya, A., Zarzour, H., Kechida, A., & Taberkit, A. M. (2022). Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural computing and applications, 34(12), 9511-9536.
-
Prasad, A., Mehta, N., Horak, M., & Bae, W. D. (2022). A two-step machine learning approach for crop disease detection using GAN and UAV technology. Remote Sensing, 14(19), 4765.
-
Vardhan, J., & Swetha, K. S. (2023). Detection of healthy and diseased crops in drone captured images using Deep Learning. arXiv preprint arXiv:2305.13490.
-
Reedha, R., Dericquebourg, E., Canals, R., & Hafiane, A. (2021). Vision transformers for weeds and crops classification of high resolution UAV images. arXiv preprint arXiv:2109.02716.
-
Chin, R., Catal, C., & Kassahun, A. (2023). Plant disease detection using drones in precision agriculture. Precision Agriculture, 24(5), 1663-1682.
-
Radočaj, D., Radočaj, P., Plaščak, I., & Jurišić, M. (2025). Evolution of Deep Learning Approaches in UAV-Based Crop Leaf Disease Detection: A Web of Science Review. Applied Sciences, 15(19), 10778.
-
Dutta, S., Banerjee, S., Mahata, S., Sen, A., & Datta, S. (2025). A Low-Cost UAV Deep Learning Pipeline for Integrated Apple Disease Diagnosis, Freshness Assessment, and Fruit Detection. arXiv preprint arXiv:2512.22990.
