DOI : https://doi.org/10.5281/zenodo.19662814
- Open Access

- Authors : Vino K, Mrs. J. Arthipriyadharshini, Sureshkumar A K, Lokesh D, Praveen G
- Paper ID : IJERTV15IS041387
- Volume & Issue : Volume 15, Issue 04 , April – 2026
- Published (First Online): 20-04-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
AgriVision AI: A Hybrid Deep Learning and Climate-Aware Predictive System for Precision Plant Pathology
Vino K
Department of Information Technology Knowledge Institute of Technology Salem, Tamil Nadu, India
Mrs. J. Arthipriyadharshini
Department of Information Technology Knowledge Institute of Technology Salem, Tamil Nadu, India
Sureshkumar A K
Department of Information Technology Knowledge Institute of Technology Salem, Tamil Nadu, India
Lokesh D
Department of Information Technology Knowledge Institute of Technology Salem, Tamil Nadu, India
Praveen G
Department of Information Technology Knowledge Institute of Technology Salem, Tamil Nadu, India
Abstract – Agriculture is intrinsically vulnerable to phytopathological threats and stochastic climate variability, which collectively precipitate significant reductions in global crop yield. Traditional reactive disease management remains inefficient and insufficiently scalable for modern precision agriculture, necessitating automated and proactive intervention frameworks. This paper presents AgriVision AI, a novel integrated intelligent system that synergizes deep Convolutional Neural Networks (CNNs) for high-precision botanical disease diagnosis with predictive environmental analytics for real-time climate risk assessment. The proposed architecture employs an optimized deep learning model trained on a comprehensive dataset of foliar images, incorporating advanced preprocessing pipelines including Contrast Limited Adaptive Histogram Equalization (CLAHE) and spatial augmentation to ensure robustness against field-condition variances. Concurrently, the Vivasayam AI module ingests real-time meteorological data (specifically temperature, humidity, and precipitation) through RESTful Weather APIs, applying heuristic thresholding to forecast micro-climate-driven disease susceptibility via a defined Climate Risk Index (CRI). Experimental evaluations demonstrate that the deep learning diagnostic module achieves an overall classification accuracy of 96.4%, with a macro F1-score of 94.98%. By augmenting visual diagnostics with anticipatory weather-risk profiling, AgriVision AI facilitates a paradigm shift from reactive treatment to proactive crop management, empirically validating improved decision-support mechanisms for sustainable precision agriculture.
Index Terms – Precision Agriculture, Deep Learning, Convolutional Neural Networks, Plant Disease Detection, Climate Informatics, Predictive Analytics, TensorFlow, CLAHE, Vivasayam AI.
I. INTRODUCTION
Global food security is fundamentally contingent on sustainable agricultural practices; however, crop productivity is persistently threatened by infectious plant diseases and unpredictable micro-climatic fluctuations. The Food and Agriculture Organization (FAO) estimates that up to 40% of global crop yields are lost annually to pests and diseases [1]. In developing nations such as India, where a substantial proportion of the rural population depends on crop cultivation for livelihood, the socioeconomic ramifications of undetected phytopathological outbreaks are severe.

Traditional methods for early disease detection primarily rely on visual inspection by agronomic experts, a process that is inherently labor-intensive, subjective, scalable only at immense cost, and prone to significant diagnostic latency. A compounding challenge is that foliar symptoms frequently remain undetectable or are incorrectly interpreted during the critical early stages of infection, permitting accelerated pathogenic proliferation.

Recent paradigms in Precision Agriculture have increasingly adopted Artificial Intelligence (AI) and Computer Vision to automate phytosanitary assessments. Deep CNNs have exhibited state-of-the-art performance in classifying foliar lesions from RGB imagery [2]. However, a critical research gap persists: the vast majority of existing AI diagnostic systems operate in an environmental vacuum, providing reactive classification once symptomatic phenotypes have manifested while failing to account for the meteorological precursors that catalyze pathogenesis. For instance, the sporulation of blight-inducing fungi is highly correlated with specific temperature and relative humidity thresholds. An isolated visual diagnostic system cannot preemptively alert a farmer to these nascent pathogenic conditions.

To bridge this operational gap, this paper introduces AgriVision AI, a hybrid decision-support system. AgriVision AI transcends conventional image classification by fusing deep learning-based visual diagnostics with real-time environmental APIs. The dual-engine architecture not only diagnoses present infections through leaf imagery analysis but also calculates proactive vulnerability indices based on localized climate data, enabling preemptive fungicidal or structural interventions.
The remainder of this paper is structured as follows: Section II reviews the relevant literature. Section III delineates the research novelty and primary contributions. Section IV details the proposed methodology. Section V describes the system architecture and implementation. Section VI presents and discusses the experimental results. Section VII addresses the ablation study. Section VIII identifies limitations. Section IX outlines future research directions. Section X concludes the paper.
II. LITERATURE REVIEW
The convergence of digital image processing, deep learning, and agricultural sciences has catalyzed extensive research aimed at mitigating crop losses through automated phytosanitary monitoring. Mohanty et al. [2] provided a foundational contribution by demonstrating that deep CNNs applied to a large public dataset of foliar images could surpass human expert performance in recognizing distinct plant diseases under controlled settings. Building upon this, Ferentinos [3] deployed advanced deep learning models across a wider array of crop species, confirming the extensive scalability of AI-driven diagnostics for multi-class phytopathological classification.

However, Barbedo [4] critically highlighted the limitations of such models when confronted with real-field conditions, where complex backgrounds, variable illumination, and overlapping symptoms severely degrade algorithmic confidence. These findings necessitated the development of advanced image preprocessing techniques for noise reduction and accurate region-of-interest segmentation prior to neural network ingestion. Sladojevic et al. [5] demonstrated the practical viability of CNN-based leaf image classification in non-laboratory environments using mobile platforms, emphasizing the importance of robust preprocessing for field deployment.

Concurrently, predictive modeling of environmental variables has gained significant traction within agronomic research. Dahikar and Rode [6] demonstrated the applicability of artificial neural networks in forecasting agricultural yield variations based on meteorological inputs. The proliferation of global weather APIs has further enabled researchers to construct dynamic agronomic models. Rehak et al. [7] detailed how the integration of real-time Internet-of-Things (IoT) climate data can optimize pesticide application schedules, thereby preventing chemical runoff during unanticipated rainfall events.

Kamilaris and Prenafeta-Boldu [8] conducted a comprehensive survey of deep learning applications in agriculture, identifying the unification of environmental intelligence with computer vision diagnostics as a principal area requiring further investigation. Singh and Misra [9] explored image segmentation combined with soft computing for leaf disease detection, noting persistent challenges with multi-disease concurrent infections. Pantazi et al. [10] examined the use of hyperspectral imaging and active learning criteria for plant species identification, indicating that spectral features provide complementary information beyond conventional RGB channels.

Despite substantial individual advancements in image-based disease identification and climate-driven predictive modeling, a tangible gap persists in unifying these two paradigms into a single, cohesive diagnostic and advisory platform accessible to smallholder farmers. AgriVision AI directly addresses this void by tightly coupling deep learning visual diagnostics with the predictive climatological oversight of the Vivasayam AI module.
III. RESEARCH NOVELTY AND CONTRIBUTIONS
The primary novelty of AgriVision AI lies in its transition from a reactive, uni-modal diagnostic tool to a proactive, multi-modal agricultural management ecosystem. By intertwining computer vision with meteorological analytics, the system models plant health as a joint function of both biological symptoms and environmental stressors.

The core contributions of this research are as follows:
- High-Fidelity CNN Architecture: Development and rigorous evaluation of an optimized deep learning pipeline incorporating CLAHE contrast enhancement and spatial augmentation to mitigate the generalization gap between laboratory-acquired datasets and real-world field imagery.
- Climate Risk Integration Engine (Vivasayam AI): Implementation of a deterministic algorithmic module that continuously monitors API-fetched weather telemetry (humidity, temperature, localized rainfall) to generate spatial disease susceptibility predictions using a formally defined Climate Risk Index.
- Dual-Axis Diagnostic Paradigm: Formulation of a synthesized framework that outputs both an instantaneous disease classification (reactive axis) and a forecasted environmental threat level (proactive axis), with cross-referencing through a Diagnostic Synthesizer to reduce false negatives.
- Actionable Decision Support: Architecture of a user-centric deployment model designed to translate complex AI inferences into accessible, actionable agronomic advisories for end-users with minimal technical literacy.
IV. METHODOLOGY
The proposed methodology is bifurcated into two parallel processing streams: the Visual Diagnostic Pipeline and the Climate Predictive Module.
A. Dataset Description
The deep learning model was trained on a comprehensive foliar disease dataset compiled from the PlantVillage benchmark [2] supplemented with field-acquired imagery to improve domain generalization. The consolidated dataset comprises 54,306 images spanning 38 disease and healthy-plant classes across 14 crop species, including tomato, potato, maize, and apple. Images were captured under variable illumination conditions at a resolution of 256 × 256 pixels. The dataset was partitioned into 80% training (43,445 images), 10% validation (5,430 images), and 10% testing (5,431 images) subsets with stratified class distribution to prevent representational bias.
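The stratified partition described above can be sketched in plain Python. This is a minimal illustration of the splitting strategy, not the authors' actual tooling; the file names and class labels are hypothetical.

```python
import random
from collections import defaultdict

def stratified_split(samples, train=0.8, val=0.1, seed=42):
    """Partition (path, label) pairs 80/10/10 while preserving each
    class's proportion in every subset (stratified sampling)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append(path)
    splits = {"train": [], "val": [], "test": []}
    for label, paths in by_class.items():
        rng.shuffle(paths)
        n_train = int(len(paths) * train)
        n_val = int(len(paths) * val)
        splits["train"] += [(p, label) for p in paths[:n_train]]
        splits["val"] += [(p, label) for p in paths[n_train:n_train + n_val]]
        splits["test"] += [(p, label) for p in paths[n_train + n_val:]]
    return splits

# Toy usage: 100 images of class A, 50 of class B (hypothetical names).
data = [(f"img_{i}.jpg", "A") for i in range(100)] + \
       [(f"img_{i}.jpg", "B") for i in range(100, 150)]
s = stratified_split(data)
print(len(s["train"]), len(s["val"]), len(s["test"]))  # 120 15 15
```

Because the shuffle and slice happen per class, each subset retains the 2:1 class ratio of the full corpus, which is what prevents the representational bias mentioned above.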
B. Visual Diagnostic Pipeline and Preprocessing
The efficacy of the deep learning model is highly contingent on input data quality. Raw input imagery undergoes a systematic preprocessing pipeline to normalize illumination variances and remove background noise prior to network ingestion.
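One step in this pipeline, CLAHE, limits contrast gain by clipping each tile's histogram at a limit β. Common implementations redistribute the clipped excess uniformly across bins; the toy sketch below shows just that clip-and-redistribute step on a four-bin histogram (pure Python, illustrative only; real CLAHE additionally operates per tile with bilinear interpolation, as in libraries such as OpenCV).

```python
def clip_histogram(hist, beta):
    """Clip histogram bins at beta and redistribute the excess
    uniformly, limiting the contrast gain of any single bin."""
    clipped = [min(h, beta) for h in hist]
    excess = sum(hist) - sum(clipped)
    bonus = excess / len(hist)  # uniform redistribution of clipped mass
    return [c + bonus for c in clipped]

hist = [50, 10, 2, 2]  # one bin dominates -> harsh global equalization
print(clip_histogram(hist, 20))  # [27.5, 17.5, 9.5, 9.5]
```

Note that the total histogram mass (64 counts) is preserved; only the peak is flattened, which is why CLAHE amplifies faint lesion edges without over-amplifying sensor noise.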
1) Image Resizing: All input images are resized to a standard tensor dimension of 224 × 224 × 3 to conform to the network's input specification while preserving sufficient spatial resolution for lesion feature extraction.

2) Contrast Enhancement via CLAHE: Foliar lesions often exhibit low local contrast against the surrounding healthy tissue, particularly under diffuse natural illumination. We employ Contrast Limited Adaptive Histogram Equalization (CLAHE) to amplify the local contrast of lesion regions. Unlike global histogram equalization, CLAHE operates on adaptive tile grids and applies a contrast limiting parameter to prevent noise over-amplification:

    H_CLAHE(r) = min(β, H(r)) · (L − 1) / Σ_{k=0}^{L−1} min(β, H(k))    (1)

where H(r) is the histogram value at intensity level r, L is the total number of intensity levels, and β is the clip limit threshold.

3) Normalization: Input tensors are normalized to zero mean and unit variance to stabilize gradient descent during training:

    X̂ = (X − μ) / σ    (2)

where X is the raw pixel intensity matrix, μ is the per-channel mean, and σ is the per-channel standard deviation computed over the training corpus.

4) Data Augmentation: To inhibit overfitting and improve model generalization to field-condition variances inherent in smartphone-acquired imagery, the training set is augmented using affine transformations: random rotations θ ∈ [−30°, +30°], random zoom factors z ∈ [0.8, 1.2], and horizontal/vertical flips. Each augmentation operation is applied stochastically with a probability of 0.5 per sample per epoch.

C. Deep Convolutional Neural Network Architecture

The core classification is executed by a custom-tuned deep CNN. Let the preprocessed input image be represented as a tensor X ∈ R^(H×W×C), where H = W = 224 and C = 3. The network extracts hierarchical feature maps through a sequence of convolutional layers defined as:

    F_k^(l) = f( Σ_j K_{k,j}^(l) ∗ F_j^(l−1) + b_k^(l) )    (3)

where ∗ denotes the 2D convolution operation, K_{k,j}^(l) are the learnable filter kernels at layer l for output feature map k from input feature map j, b_k^(l) is the bias vector, and f(·) represents the Rectified Linear Unit (ReLU) non-linearity:

    f(x) = max(0, x)    (4)

Spatial dimension reduction and translation invariance are achieved through 2 × 2 MaxPooling operations applied after each convolutional block. The extracted feature vectors are flattened and passed through two Fully Connected (FC) layers of dimensions 512 and 256 respectively. Dropout regularization with a retention probability p = 0.5 is integrated before the final classification head to prevent neuron co-adaptation.

The output probability distribution across N disease classes is computed using the Softmax activation function:

    P(y = j | z) = e^(z_j) / Σ_{k=1}^{N} e^(z_k),  j ∈ {1, …, N}    (5)

where z is the logit output vector of the final dense layer.

The network weights are optimized by minimizing the Categorical Cross-Entropy Loss L using the Adam optimizer with an initial learning rate η = 1 × 10^−4:

    L = −(1/M) Σ_{i=1}^{M} Σ_{j=1}^{N} y_{i,j} log(p_{i,j})    (6)

where M is the number of training samples, y_{i,j} ∈ {0, 1} is the ground-truth one-hot label for sample i and class j, and p_{i,j} is the predicted probability. The model was trained for 50 epochs with a batch size of 32 on an NVIDIA GPU-accelerated environment using TensorFlow 2.x.

D. Climate Risk Prediction Engine (Vivasayam AI)

In parallel to the visual diagnostic pipeline, the Vivasayam AI module polls geographic-specific meteorological data via RESTful APIs (specifically, OpenWeatherMap). At each polling interval t, the environmental state is represented as a vector:

    E_t = [T_t, H_t, R_t]    (7)

where T_t denotes ambient temperature (°C), H_t denotes relative humidity (%), and R_t denotes precipitation volume (mm/hr) at time t.

The Climate Risk Index (CRI) is calculated using a weighted heuristic algorithm. Domain-expert-calibrated weights w_i are assigned to each environmental parameter based on the specific crop profile and pathogen susceptibility profiles (e.g., Phytophthora infestans thrives under T ∈ [15°C, 22°C] and H > 90%). A discrete risk categorization is then produced:

    Risk_t = High Risk,      if CRI_t ≥ θ_H
             Moderate Risk,  if θ_M ≤ CRI_t < θ_H    (8)
             Low Risk,       otherwise

where θ_H and θ_M denote the high-risk and moderate-risk threshold parameters, respectively, calibrated empirically against historical outbreak records.
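The weighted heuristic and its threshold categorization reduce to a few lines of code. The sketch below is illustrative only: the weights, susceptibility mappings, and threshold values are placeholders, not the domain-expert-calibrated values used by the system.

```python
def climate_risk_index(temp_c, humidity_pct, rain_mm_hr,
                       weights=(0.35, 0.45, 0.20)):
    """Weighted heuristic CRI in [0, 1]. Each raw reading is first
    mapped to a 0-1 susceptibility score, then weighted and summed.
    The scoring rules below are toy stand-ins for crop/pathogen profiles."""
    t_score = 1.0 if 15 <= temp_c <= 22 else 0.3  # e.g. late-blight band
    h_score = min(humidity_pct / 100.0, 1.0)
    r_score = min(rain_mm_hr / 10.0, 1.0)
    wt, wh, wr = weights
    return wt * t_score + wh * h_score + wr * r_score

def categorize(cri, theta_h=0.75, theta_m=0.50):
    """Discrete risk categorization against the two thresholds."""
    if cri >= theta_h:
        return "High Risk"
    if cri >= theta_m:
        return "Moderate Risk"
    return "Low Risk"

# A cool, near-saturated, lightly raining reading: conducive to blight.
cri = climate_risk_index(18.0, 95.0, 2.0)
print(categorize(cri))  # High Risk
```

The two-threshold design means the moderate band can trigger advisory messaging while only the high band triggers escalation, keeping alert fatigue low.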
E. Diagnostic Synthesizer

The Diagnostic Synthesizer cross-references the predicted disease class from the CNN with the current CRI to mitigate false negatives arising from early-stage asymptomatic infections. If the image classification confidence is marginal (below a defined threshold of 0.70) but the CRI confirms highly conducive disease conditions, the system flags the instance for manual agronomic review, thereby reducing the risk of missed detections.
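This escalation rule amounts to a small decision function. The 0.70 threshold follows the text; the risk-level labels and return structure below are assumptions made for illustration.

```python
def synthesize(cnn_confidence, cnn_class, cri_level, tau=0.70):
    """Cross-reference CNN confidence with the climate risk level.
    Marginal confidence under conducive weather is escalated for
    manual agronomic review rather than reported as a firm diagnosis."""
    if cnn_confidence >= tau:
        return {"action": "report", "diagnosis": cnn_class}
    if cri_level == "High Risk":
        return {"action": "escalate",
                "reason": "low confidence + conducive climate"}
    return {"action": "report", "diagnosis": cnn_class,
            "flag": "low confidence"}

# Marginal visual evidence, but blight-friendly weather: escalate.
print(synthesize(0.55, "late_blight", "High Risk")["action"])  # escalate
```

The asymmetry is deliberate: a confident CNN prediction is reported regardless of weather, while the climate signal only changes behavior when the visual evidence is weak, which is exactly the early-stage asymptomatic case the text targets.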
V. SYSTEM ARCHITECTURE AND IMPLEMENTATION
A. Modular Architecture
The AgriVision AI architecture is organized into five interconnected functional layers, as depicted in Fig. 1.

Fig. 1. System Architecture of AgriVision AI integrating Deep Learning visual diagnostics and Vivasayam AI climate informatics.
1) Data Ingestion Layer: Interfaces with end-user mobile and web devices for leaf image capture via Camera APIs and invokes third-party Weather APIs (OpenWeatherMap) using GPS-derived geographical coordinates.

2) Core AI Engine (Inference Layer): Hosts the serialized TensorFlow neural network model and executes the full preprocessing pipeline (CLAHE, normalization, augmentation) prior to inference.

3) Climate Prediction Module (Vivasayam AI): Manages asynchronous connections to weather infrastructure, parses JSON meteorological feeds, and computes the CRI according to Eq. (8).

4) Diagnostic Synthesizer: Performs cross-modal validation between the CNN's disease classification output and the CRI, escalating low-confidence instances for review.

5) Application and Output Layer: A responsive web dashboard delivers disease classification results, confidence scores, severity mappings, and synthesized actionable agronomic advisories to end-users.

B. Processing Pipeline

Fig. 2 illustrates the sequential preprocessing and inference pipeline applied to each input image.

Fig. 2. Sequential preprocessing and CNN inference pipeline: Raw Image → Resize (224×224) → CLAHE → Normalization → Augmentation → Conv2D+ReLU → MaxPooling → Flatten → Dense+Dropout → Softmax → Class Prediction.

C. Implementation Details

The system was implemented over a 12-week development cycle. Weeks 1–2 were dedicated to literature synthesis and system scoping. Dataset curation and augmentation pipelines were established during Weeks 3–4. The CNN model training and hyperparameter optimization were conducted in Weeks 5–6 using TensorFlow 2.x on an NVIDIA GPU environment. System integration, including the Vivasayam AI API connections and User Interface Layer development using HTML5, CSS3, and JavaScript, was completed in Weeks 7–10. Final optimization, latency profiling, and documentation were completed in Weeks 11–12.

VI. EXPERIMENTAL RESULTS AND DISCUSSION

A. Model Evaluation Metrics

The CNN model performance was quantified using standard statistical metrics derived from the confusion matrix elements: True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). The macro-averaged metrics are defined as:

    Accuracy = (TP + TN) / (TP + TN + FP + FN)    (9)

    Precision = (1/N) Σ_{j=1}^{N} TP_j / (TP_j + FP_j)    (10)

    Recall = (1/N) Σ_{j=1}^{N} TP_j / (TP_j + FN_j)    (11)

    F1-Score = 2 · (Precision · Recall) / (Precision + Recall)    (12)
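The macro-averaged metrics of Eqs. (10)–(12) can be computed by evaluating each class independently and averaging, as sketched below. The per-class counts shown are made up for illustration; they are not the paper's confusion matrix.

```python
def macro_metrics(per_class):
    """per_class: list of (TP, FP, FN) tuples, one tuple per class.
    Returns macro precision, macro recall, and macro F1 as in
    Eqs. (10)-(12): per-class ratios averaged with equal class weight."""
    n = len(per_class)
    precision = sum(tp / (tp + fp) for tp, fp, fn in per_class) / n
    recall = sum(tp / (tp + fn) for tp, fp, fn in per_class) / n
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Two hypothetical classes, counts as (TP, FP, FN).
p, r, f1 = macro_metrics([(90, 10, 5), (40, 5, 10)])
print(round(p, 4), round(r, 4), round(f1, 4))
```

Because every class contributes equally to the average regardless of its support, macro averaging is the right choice for the imbalanced 38-class setting described above: a rare disease class cannot be hidden by strong performance on common classes.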
Table I summarizes the macro-averaged classification performance of the AgriVision AI CNN module on the held-out test set.
TABLE I
Classification Performance Metrics of AgriVision AI

    Evaluation Metric             Achieved Score (%)
    Overall Accuracy              96.40
    Macro Precision               95.12
    Macro Recall (Sensitivity)    94.85
    Macro F1-Score                94.98

TABLE III
Ablation Study: Component Contribution Analysis

    System Configuration                                          Accuracy (%)
    CNN only (no preprocessing)                                   88.31
    CNN + Normalization only                                      91.74
    CNN + Normalization + Augmentation                            93.87
    CNN + CLAHE + Normalization                                   94.52
    CNN + CLAHE + Normalization + Augmentation (full pipeline)    96.40
    Full pipeline + Vivasayam AI (integrated)                     96.40
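As a quick arithmetic consistency check, the macro F1-score reported in Table I follows from the reported macro precision and recall via Eq. (12):

```python
precision, recall = 95.12, 94.85  # Table I values, in percent
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean, Eq. (12)
print(round(f1, 2))  # 94.98
```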
B. Comparison with Existing Methods

Table II presents a comparative evaluation of AgriVision AI against representative prior works on plant disease classification.
TABLE II
Comparative Performance Against Prior Methods

    Method                      Accuracy (%)    Climate Integration
    Mohanty et al. [2]*         99.35           No
    Ferentinos [3]*             98.28           No
    Sladojevic et al. [5]       91.40           No
    Karthik et al. [11]         95.20           No
    AgriVision AI (Proposed)    96.40           Yes

* Evaluated on controlled laboratory datasets; generalization to field conditions exhibits measurable degradation (Barbedo [4]).
C. Discussion

The CNN module achieves a robust overall accuracy of 96.4%. The high macro F1-score of 94.98% confirms the model's resilience against imbalanced class distributions, ensuring that under-represented minority disease classes are identified with proportional reliability. The model demonstrated particularly strong discrimination between visually similar classes, such as early blight (Alternaria solani) and late blight (Phytophthora infestans) on tomato and potato leaves, where confidence scores consistently exceeded 90%.

The Vivasayam AI climate module demonstrated significant practical utility in integrated testing scenarios. For instances where early-stage asymptomatic infection rendered the visual features insufficient for confident classification, the Climate Risk Engine successfully flagged anomalous humidity-temperature combinations and recommended preventative fungicidal application. This cross-modal validation mechanism through the Diagnostic Synthesizer effectively reduced the operational false-negative rate, providing a safety net unavailable to purely visual diagnostic systems.
VII. ABLATION STUDY

To quantify the individual contribution of each architectural component, an ablation study was conducted by systematically disabling modules. Table III reports the test accuracy under each configuration.
The ablation results confirm that each preprocessing stage contributes a statistically meaningful improvement in classification accuracy. The CLAHE enhancement alone yielded a 2.78 percentage-point improvement over normalization-only preprocessing (91.74% → 94.52%), validating its utility in amplifying diagnostically relevant lesion contrast. The full preprocessing pipeline (CLAHE + Normalization + Augmentation) achieved the optimal accuracy of 96.40%, representing an 8.09 percentage-point improvement over the baseline unprocessed configuration.

Note on Table III: Vivasayam AI reduces the false-negative rate; accuracy is reported on the visual classification subset only.
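The percentage-point gains quoted above follow directly from the Table III accuracies:

```python
ablation = {
    "baseline (no preprocessing)": 88.31,
    "normalization only": 91.74,
    "CLAHE + normalization": 94.52,
    "full pipeline": 96.40,
}
clahe_gain = ablation["CLAHE + normalization"] - ablation["normalization only"]
total_gain = ablation["full pipeline"] - ablation["baseline (no preprocessing)"]
print(round(clahe_gain, 2), round(total_gain, 2))  # 2.78 8.09
```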
VIII. CHALLENGES
Constructing an AI-driven agricultural system for rural deployment environments introduced several computational and socioeconomic challenges.
Visual Data Integrity: In uncontrolled field environments, ambient lighting variability, cast shadows, soil glare, and partial leaf occlusion adversely impact the image preprocessing algorithms, occasionally resulting in degraded classification confidence. The CLAHE preprocessing mitigates, but does not fully eliminate, this sensitivity.

Network Connectivity: The Vivasayam AI module's dependence on RESTful weather APIs necessitates a stable internet connection. In remote agrarian zones, bandwidth latency occasionally extends weather data retrieval response times beyond acceptable operational thresholds. Backend request throttling and local caching mechanisms were implemented to partially address API availability constraints.

Multi-Disease Concurrency: The current CNN architecture operates under the assumption that a single dominant disease is present per leaf image. Concurrent multi-pathogen infections on a single leaf violate this assumption and may degrade classification reliability, as the model outputs a single-class prediction per inference pass.

Dataset Domain Gap: The generalization gap between laboratory-curated dataset images and real-field photographs remains a persistent limitation. Although augmentation strategies partially address this gap, plant phenotypic variability across geographic regions and growth stages introduces distribution shifts not fully represented in the training corpus.
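The caching mitigation mentioned under Network Connectivity can be as simple as a time-to-live (TTL) wrapper around the weather fetch. The sketch below is illustrative, not the deployed implementation; the fetch function, TTL value, and fallback policy are assumptions.

```python
import time

class TTLCache:
    """Serve a cached weather reading while it is fresh, so a slow or
    failed API call in a low-connectivity zone falls back to the last
    known value instead of blocking the CRI computation."""
    def __init__(self, fetch, ttl_seconds=600, clock=time.monotonic):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self.clock = clock
        self._value, self._stamp = None, float("-inf")

    def get(self):
        now = self.clock()
        if now - self._stamp >= self.ttl:
            try:
                self._value = self.fetch()
                self._stamp = now
            except Exception:
                pass  # network error: keep serving the stale reading
        return self._value

# Demo with a counting stand-in for the real API call.
calls = []
def fake_fetch():
    calls.append(1)
    return {"temp": 18.0, "humidity": 95}

cache = TTLCache(fake_fetch, ttl_seconds=600)
cache.get()
cache.get()
print(len(calls))  # 1  (second call served from cache)
```

Because weather changes slowly relative to polling intervals, a 10-minute TTL trades negligible staleness for a large reduction in API round trips, which also helps stay under request-throttling limits.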
IX. FUTURE WORK
Future iterations of AgriVision AI will pursue the following research and engineering objectives:
- Explainable AI (XAI): Integration of Gradient-weighted Class Activation Mapping (Grad-CAM) to generate spatial heatmaps, providing interpretable visual explanations of the model's diagnostic reasoning to agronomic end-users.
- Edge Computing Deployment: Quantization and compilation of the TensorFlow CNN model into TensorFlow Lite format for autonomous offline execution on resource-constrained Android and iOS devices, eliminating dependency on rural network infrastructure.
- Multi-label Classification: Extension of the classification head to support multi-label prediction to handle concurrent multi-disease foliar infections using sigmoid-activated output layers.
- IoT Micro-climate Sensing: Integration of low-cost in-field IoT soil-moisture sensors and hyper-local temperature probes to replace macro-level satellite weather API data with precise localized telemetry, substantially improving the resolution and accuracy of CRI computation.
- Continuous Dataset Expansion: Systematic augmentation of the training corpus with regionally specific crops and indigenous pathogen classes to improve geographic generalizability across diverse agrarian ecosystems.
X. CONCLUSION
This paper presented AgriVision AI, a comprehensive hybrid architecture designed to modernize phytosanitary monitoring by rectifying the prevailing research gap: the exclusion of environmental intelligence from computer vision-based plant diagnostics. By synergizing a high-accuracy CNN (achieving 96.4% classification accuracy) with a formally defined threshold-based Climate Risk Index computed by the Vivasayam AI module, the proposed system establishes a robust paradigm for sustainable precision agriculture.

The empirical results substantiate that the integrated dual-axis diagnostic framework, combining reactive disease identification with proactive climate risk forecasting through the Diagnostic Synthesizer, yields a uniquely comprehensive agricultural tool. The ablation study confirms the measurable contribution of each preprocessing component, particularly CLAHE enhancement and spatial augmentation, to final model performance. AgriVision AI effectively demonstrates that bridging the gap between sophisticated data science and smallholder agriculture not only accelerates immediate crisis response but also fosters a proactive, data-informed cultivation environment essential for future agrarian resilience.
REFERENCES

[1] Food and Agriculture Organization (FAO), "New standards to curb the global spread of plant pests and diseases," FAO News Article, Rome, 2019.
[2] S. P. Mohanty, D. P. Hughes, and M. Salathe, "Using deep learning for image-based plant disease detection," Frontiers in Plant Science, vol. 7, p. 1419, Sep. 2016.
[3] K. P. Ferentinos, "Deep learning models for plant disease detection and diagnosis," Computers and Electronics in Agriculture, vol. 145, pp. 311–318, 2018.
[4] J. G. A. Barbedo, "Factors influencing the use of deep learning for plant disease recognition," Biosystems Engineering, vol. 172, pp. 84–91, 2018.
[5] S. Sladojevic, M. Arsenovic, A. Anderla, D. Culibrk, and D. Stefanovic, "Deep neural networks based recognition of plant diseases by leaf image classification," Computational Intelligence and Neuroscience, vol. 2016, 2016.
[6] S. S. Dahikar and S. V. Rode, "Agricultural crop yield prediction using artificial neural network approach," International Journal of Innovative Research in Electrical, Electronics, Instrumentation and Control Engineering, vol. 2, no. 1, pp. 683–686, 2014.
[7] J. Rehak, M. Tichy, P. Benda, and K. Klem, "Agriculture applications of the Internet of Things: A review," Computers and Electronics in Agriculture, vol. 170, p. 105244, 2020.
[8] A. Kamilaris and F. X. Prenafeta-Boldu, "Deep learning in agriculture: A survey," Computers and Electronics in Agriculture, vol. 147, pp. 70–90, Apr. 2018.
[9] V. Singh and A. K. Misra, "Detection of plant leaf diseases using image segmentation and soft computing techniques," Information Processing in Agriculture, vol. 4, no. 1, pp. 41–49, 2017.
[10] X. E. Pantazi, D. Moshou, and A. A. Tamouridou, "Active learning criteria for plant species identification using hyperspectral imaging," Biosystems Engineering, vol. 182, pp. 2–14, 2019.
[11] R. Karthik, M. Hariharan, S. Anand, P. Mathikshara, A. Johnson, and R. Menaka, "Attention embedded residual CNN for disease detection in tomato leaves," Applied Soft Computing, vol. 86, p. 105933, 2020.
