DOI : 10.17577/IJERTCONV14IS020151- Open Access

- Authors : Utkarsha Baban Daphal, Mr. Hanumant P. Jagtap
- Paper ID : IJERTCONV14IS020151
- Volume & Issue : Volume 14, Issue 02, NCRTCS – 2026
- Published (First Online) : 21-04-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
A Comprehensive Comparative Study of Explainable and Generalizable AI Models for Crop Recommendation Across Diverse Agro-Climatic Regions
Author: Utkarsha Baban Daphal
MAEER's MIT Arts, Commerce and Science College, Alandi (D.)
Co-Author: Mr. Hanumant P. Jagtap
MAEER's MIT Arts, Commerce and Science College, Alandi (D.)
Abstract
Agricultural productivity across diverse agro-climatic regions depends heavily on accurate crop selection based on soil characteristics, climatic variability, and environmental sustainability factors. Artificial Intelligence (AI)-driven crop recommendation systems have demonstrated significant potential in supporting data-informed agricultural decision-making. However, most existing models prioritize predictive accuracy while neglecting two aspects essential for real-world deployment: model explainability and cross-regional generalization. This study presents a comprehensive comparative analysis of machine learning, ensemble learning, and deep learning models for crop recommendation across heterogeneous agro-climatic zones. The performance of models such as Random Forest, Support Vector Machine, XGBoost, and CNN-LSTM is evaluated using metrics including accuracy, F1-score, and cross-region validation stability. Findings indicate that ensemble methods achieve high region-specific accuracy (90–96%), yet experience performance degradation of 5–15% when applied to unseen climatic regions. To address transparency concerns, Explainable AI techniques such as SHAP and LIME are examined for feature contribution analysis, revealing nitrogen content, rainfall, and temperature as dominant predictive factors. The study proposes a hybrid explainable-generalizable framework integrating ensemble modeling, SHAP-based interpretability, and transfer learning adaptation to enhance robustness across diverse agro-ecological contexts. The results highlight the importance of balancing predictive performance, interpretability, and scalability to develop trustworthy and sustainable AI-based agricultural decision-support systems.
Keywords: Crop Recommendation System; Explainable Artificial Intelligence (XAI); Agro-Climatic Regions; Ensemble Learning; Transfer Learning; SHAP; Generalization; Precision Agriculture; Machine Learning in Agriculture; Decision Support Systems.
1. INTRODUCTION
Agriculture plays a vital role in ensuring global food security, economic stability, and rural employment. However, agricultural productivity is highly dependent on environmental factors such as soil fertility, rainfall patterns, humidity, temperature, and seasonal variability. In regions characterized by diverse agro-climatic conditions, selecting an appropriate crop becomes a complex decision-making task. Improper crop selection may result in reduced yield, soil degradation, and financial losses for farmers. Therefore, intelligent and data-driven crop
recommendation systems have become increasingly important in modern precision agriculture [1].
Traditional crop selection methods rely primarily on farmers' experience, historical patterns, and regional advisory services. While such knowledge systems are valuable, they often lack adaptability to rapid climatic fluctuations and long-term environmental changes. Climate variability has increased agricultural uncertainty, making predictive and adaptive technologies essential for sustainable farming [2]. Artificial Intelligence (AI) and machine learning (ML) techniques have emerged as powerful tools capable of analyzing multidimensional agricultural datasets and generating optimized crop recommendations.
AI-based crop recommendation systems typically consider soil parameters such as nitrogen (N), phosphorus (P), potassium (K), and pH levels, along with climatic factors including rainfall, humidity, and temperature [3]. Various machine learning algorithms such as Support Vector Machines (SVM), K-Nearest Neighbor (KNN), Decision Trees, and Random Forest (RF) have been widely applied in crop prediction tasks [4]. Among these, ensemble learning techniques such as Random Forest and Extreme Gradient Boosting (XGBoost) have demonstrated superior predictive accuracy, often exceeding 90% in region-specific datasets [5], [6].
Despite these promising results, two major challenges restrict the large-scale deployment of AI-driven crop recommendation systems. The first challenge is model explainability. Many high-performing models, particularly ensemble and deep learning approaches, operate as black-box systems where the reasoning behind predictions is not easily interpretable [7]. In agricultural contexts, transparency is crucial because farmers must understand why a specific crop is recommended. Lack of interpretability may reduce trust and limit adoption of AI systems [8]. To address this limitation, Explainable Artificial Intelligence (XAI) techniques such as SHapley Additive exPlanations (SHAP) and
Local Interpretable Model-Agnostic Explanations (LIME) have been introduced to provide feature importance and local explanation insights [9], [10].
The second critical issue concerns model generalization across diverse agro-climatic regions. Many existing studies train and validate models using datasets collected from a single geographic region. Although such models achieve high local accuracy, their performance often declines when applied to different environmental conditions due to domain shift [11]. Variations in soil composition, irrigation practices, and microclimatic patterns contribute to this performance degradation. Empirical evidence suggests that cross-region testing can reduce model accuracy by 5–15%, highlighting the need for generalizable and adaptive AI frameworks [12].
Transfer learning and domain adaptation techniques have been proposed to enhance cross-regional robustness. Transfer learning enables knowledge gained from one dataset to be leveraged in another related dataset, thereby improving predictive stability in new agro-ecological contexts [13]. Additionally, hybrid frameworks that combine ensemble modeling with explainability modules and generalization strategies offer promising solutions for scalable agricultural AI deployment [14].
Beyond technical considerations, behavioral and contextual factors also influence the adoption of crop recommendation systems. Farmer trust, perceived reliability, digital literacy, and economic benefit play significant roles in technology acceptance [15]. Studies indicate that explainable systems increase user confidence and improve decision-making transparency, thereby encouraging practical adoption [16].
Given these emerging challenges and opportunities, there is a strong need for a comprehensive comparative study that evaluates AI models not only based on predictive performance but also on interpretability and cross-regional adaptability. Most existing research focuses primarily on
accuracy metrics without systematically analyzing explainability mechanisms and generalization performance within a unified framework. Therefore, this study aims to bridge this gap by conducting a detailed comparative evaluation of machine learning, ensemble, and deep learning models for crop recommendation across diverse agro-climatic regions.
The main objectives of this research are:

- To analyze the predictive performance of various AI models for crop recommendation;
- To examine the role of explainable AI techniques in improving model transparency;
- To evaluate cross-regional generalization capabilities; and
- To propose a hybrid explainable-generalizable framework for sustainable agricultural decision-support systems.

By integrating accuracy, interpretability, and scalability considerations, this study contributes to the development of trustworthy and robust AI-driven crop recommendation systems capable of supporting climate-resilient agriculture.
2. Literature Review
The application of Artificial Intelligence (AI) in agriculture has significantly expanded in recent years, particularly in crop recommendation, yield prediction, soil classification, and climate-based decision support systems. This section synthesizes prior research under four major themes: (A) Machine Learning-Based Crop Recommendation, (B) Ensemble and Deep Learning Approaches, (C) Explainable AI in Agriculture, and (D) Cross-Regional Generalization and Transfer Learning.
A. Machine Learning-Based Crop Recommendation Systems
Early AI-driven crop recommendation systems primarily utilized classical machine learning algorithms such as Decision Trees, K-Nearest Neighbor (KNN), Support Vector Machines (SVM), and Naïve Bayes classifiers. These models rely on structured agricultural datasets containing soil nutrients (N, P, K), pH values, temperature, rainfall, and humidity [1], [2].
Random Forest (RF), an ensemble of decision trees, has been widely reported as one of the most reliable classifiers for agricultural prediction tasks due to its ability to handle nonlinear relationships and reduce overfitting [3]. Several empirical studies report RF accuracy ranging from 85% to 92% in region-specific crop recommendation datasets [4]. Similarly, SVM has demonstrated competitive performance, particularly for small and moderately sized datasets, achieving accuracy levels between 80% and 89% [5].
However, most of these studies are conducted using localized datasets. While high accuracy is reported during internal validation, limited emphasis is placed on external or cross-regional validation. This creates concerns regarding the real-world scalability of these systems.
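The feature vector these classifiers consume can be made concrete with a minimal sketch. The thresholds and crop names below are hypothetical illustrations of the decision logic, not values from any published model:

```python
# Minimal sketch of a feature-based crop recommender over the standard
# (N, P, K, temperature, humidity, pH, rainfall) inputs.
# All thresholds and crop choices are hypothetical, for illustration only.

def recommend_crop(n, p, k, temperature, humidity, ph, rainfall):
    """Return a crop label from soil nutrients (ppm), temperature (deg C),
    humidity (%), soil pH, and annual rainfall (mm)."""
    if rainfall > 200 and humidity > 80:
        return "rice"          # water-intensive crop for wet conditions
    if n > 80 and 5.5 <= ph <= 7.0:
        return "maize"         # nitrogen-hungry cereal on near-neutral soil
    if k > 45 and temperature > 25:
        return "banana"        # potassium-demanding, warm-climate crop
    return "chickpea"          # hardy fallback for drier, low-input soils

print(recommend_crop(90, 42, 43, 21.0, 82.0, 6.5, 220.0))  # -> rice
```

A trained Decision Tree effectively learns a deeper version of such threshold rules from data, which is why tree models are considered comparatively easy to interpret.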
B. Ensemble and Deep Learning Approaches
To enhance predictive performance, researchers have increasingly adopted ensemble learning techniques such as Gradient Boosting, AdaBoost, and Extreme Gradient Boosting (XGBoost). Ensemble methods combine multiple weak learners to create a stronger predictive model, thereby improving robustness and reducing variance [6].
XGBoost, in particular, has demonstrated superior performance on agricultural datasets, often achieving accuracy levels in the 90–96% range [7]. Studies indicate that boosting methods effectively capture complex interactions between soil nutrients and climatic variables, leading to improved crop classification accuracy.
Deep learning architectures such as Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks have also been applied to crop prediction, especially when incorporating temporal weather patterns and satellite imagery [8]. CNN models are effective in extracting spatial features from remote sensing data, while LSTM networks model time-series climatic trends. However, deep learning models typically require large datasets and high computational resources. Furthermore, their black-box nature limits interpretability, which is critical for farmer-centered applications [9].
Although ensemble and deep learning models achieve high predictive accuracy, many studies prioritize performance metrics such as accuracy and F1-score while neglecting interpretability and cross-regional robustness.
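The variance-reduction idea behind ensembles can be sketched in a few lines. The three "weak learners" and their thresholds below are hypothetical stand-ins, not trained models:

```python
# Sketch of ensemble voting: several weak classifiers vote and the majority
# label wins, illustrating why combining learners reduces variance.
from collections import Counter

def soil_rule(sample):      # hypothetical weak learner using nitrogen only
    return "maize" if sample["N"] > 80 else "chickpea"

def rain_rule(sample):      # hypothetical weak learner using rainfall only
    return "maize" if sample["rainfall"] > 150 else "chickpea"

def ph_rule(sample):        # hypothetical weak learner using pH only
    return "maize" if 5.5 <= sample["ph"] <= 7.0 else "chickpea"

def ensemble_predict(sample, learners):
    votes = Counter(learner(sample) for learner in learners)
    return votes.most_common(1)[0][0]   # majority vote

sample = {"N": 90, "rainfall": 120, "ph": 6.4}
print(ensemble_predict(sample, [soil_rule, rain_rule, ph_rule]))  # -> maize
```

Random Forest follows this bagging-style voting scheme over many decision trees, while boosting methods such as XGBoost instead train learners sequentially, each correcting the residual errors of its predecessors.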
C. Explainable Artificial Intelligence in Agriculture
Explainable Artificial Intelligence (XAI) has emerged as a response to the limitations of black-box models. In agricultural decision-support systems, transparency is essential because farmers and policymakers must understand the reasoning behind recommendations.
SHapley Additive exPlanations (SHAP) is one of the most widely adopted XAI techniques in agricultural modeling [10]. SHAP assigns contribution values to each input feature, allowing researchers to identify dominant predictors such as nitrogen content, rainfall, and temperature. Studies integrating SHAP with Random Forest and XGBoost models report improved stakeholder trust and interpretability without significantly affecting predictive accuracy [11].
Similarly, Local Interpretable Model-Agnostic Explanations (LIME) provides local explanations for individual predictions [12]. While LIME is useful for case-level interpretation, it does not offer comprehensive
global feature insights. Some researchers argue that post-hoc explanation techniques may not fully resolve transparency concerns, emphasizing the need for inherently interpretable models [13].
Despite increasing interest in XAI, systematic comparative analysis of explainable models in crop recommendation systems remains limited. Many studies implement SHAP or LIME as supplementary tools rather than integrating explainability as a core design objective.
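The global feature-ranking idea behind these techniques can be illustrated with permutation importance: shuffle one feature and measure how much accuracy drops. This is a simpler proxy for the spirit of SHAP's global rankings, not an exact Shapley computation; the classifier and dataset below are synthetic stand-ins:

```python
# Sketch of permutation-style global feature importance: how much does
# shuffling one feature degrade accuracy? A feature the model ignores
# (here, pH) should receive an importance of exactly zero.
import random

random.seed(0)

def model(sample):
    """Hypothetical stand-in classifier over (N, ph, rainfall); pH is ignored."""
    return "rice" if sample[0] > 50 and sample[2] > 100 else "wheat"

# toy dataset; labels come from the model itself, so baseline accuracy is 1.0
data = [(random.uniform(0, 100), random.uniform(4, 9), random.uniform(0, 300))
        for _ in range(200)]
labels = [model(x) for x in data]

def accuracy(rows):
    return sum(model(x) == y for x, y in zip(rows, labels)) / len(rows)

base = accuracy(data)
importances = {}
for i, name in enumerate(["N", "ph", "rainfall"]):
    column = [row[i] for row in data]
    random.shuffle(column)                      # break the feature-label link
    permuted = [row[:i] + (column[j],) + row[i + 1:] for j, row in enumerate(data)]
    importances[name] = round(base - accuracy(permuted), 3)
print(importances)
```

Consistent with the literature cited above, attribution scores of this kind are what allow studies to report nitrogen, rainfall, and temperature as dominant predictors.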
D. Cross-Regional Generalization and Transfer Learning
One of the most critical gaps in current literature is the limited focus on model generalization across diverse agro-climatic zones. Agricultural datasets vary significantly based on geographic location, soil composition, rainfall distribution, irrigation methods, and farming practices. Models trained on data from one region often exhibit performance degradation when applied to different environmental conditions due to domain shift [14].
Empirical evidence suggests that cross-region validation can reduce prediction accuracy by 5–15% compared to region-specific validation [15]. This reduction highlights the need for robust adaptation mechanisms.
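Cross-region validation of this kind can be scaffolded as a leave-one-region-out loop: train on all regions but one and test on the held-out region. The sketch below uses synthetic data and a deliberately simple nearest-centroid rule, purely to illustrate the evaluation protocol:

```python
# Leave-one-region-out evaluation sketch: each region is held out in turn,
# exposing the cross-region degradation that region-internal validation hides.
# Regions and the centroid "classifier" are synthetic illustrations.
import random

random.seed(1)

def make_region(offset, n=60):
    """Synthetic (nitrogen, rainfall) samples; `offset` mimics a regional shift."""
    rows = []
    for _ in range(n):
        nitrogen = random.gauss(60 + offset, 8)
        rain = random.gauss(150 + 2 * offset, 20)
        label = "rice" if rain > 150 else "wheat"
        rows.append(((nitrogen, rain), label))
    return rows

regions = {"A": make_region(0), "B": make_region(5), "C": make_region(25)}

def train(rows):
    groups = {}
    for x, y in rows:
        groups.setdefault(y, []).append(x)
    # one centroid per crop label
    return {y: tuple(sum(dim) / len(dim) for dim in zip(*xs))
            for y, xs in groups.items()}

def predict(centroids, x):
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(x, centroids[y])))

results = {}
for held_out in regions:
    train_rows = [r for name, rows in regions.items() if name != held_out for r in rows]
    centroids = train(train_rows)
    test_rows = regions[held_out]
    results[held_out] = sum(predict(centroids, x) == y for x, y in test_rows) / len(test_rows)
print({k: round(v, 2) for k, v in results.items()})
```

The same loop applies unchanged when the centroid rule is swapped for Random Forest or XGBoost; only the `train`/`predict` pair changes.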
Transfer learning has been proposed as a solution to improve generalization. In transfer learning frameworks, knowledge from a source dataset is leveraged to enhance prediction in a target dataset with limited labeled data [16]. Studies applying transfer learning in agricultural modeling report improvements in F1-score and cross-region stability [17].
Additionally, domain adaptation techniques attempt to minimize distributional differences between source and target regions. These methods enhance model robustness but are still underexplored in crop recommendation research.
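One of the simplest distribution-alignment steps used in domain adaptation is moment matching: rescale a target-region feature so its mean and spread match the source region before applying a source-trained model. The rainfall values below are synthetic illustrations:

```python
# Sketch of mean/variance alignment between a source and target region,
# a minimal form of domain adaptation. Values are synthetic.
from statistics import mean, pstdev

source_rain = [120, 150, 180, 140, 160]   # hypothetical source-region rainfall (mm)
target_rain = [320, 350, 380, 340, 360]   # same phenomenon on a shifted scale

def align(target, source):
    """Standardize target values, then re-express them on the source scale."""
    mu_t, sd_t = mean(target), pstdev(target)
    mu_s, sd_s = mean(source), pstdev(source)
    return [(x - mu_t) / sd_t * sd_s + mu_s for x in target]

aligned = align(target_rain, source_rain)
print([round(x, 1) for x in aligned])  # -> [120.0, 150.0, 180.0, 140.0, 160.0]
```

After alignment, the target values fall on the source distribution, so thresholds learned on the source region remain meaningful; richer adaptation methods generalize this idea to full feature distributions.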
Socio-Technical and Adoption Considerations
Beyond technical performance, adoption of AI-based crop recommendation systems depends on behavioral and contextual factors. Trust, perceived reliability, ease of use, and digital literacy significantly influence farmer acceptance [18]. Research indicates that systems incorporating explainable outputs are more likely to be adopted compared to opaque black-box models [19].
Furthermore, accessibility challenges, including limited rural internet connectivity and data availability, affect practical implementation [20]. Therefore, sustainable agricultural AI systems must balance technical robustness with usability and transparency.
Summary of Research Gaps
The reviewed literature reveals several research gaps:

- Most studies emphasize accuracy while overlooking explainability and generalization.
- Cross-regional validation is rarely conducted systematically.
- Integration of XAI techniques is often post-hoc rather than embedded within system architecture.
- Limited comparative studies evaluate traditional ML, ensemble, and deep learning models under unified performance metrics.
These gaps justify the need for a comprehensive comparative study focusing on explainable and generalizable AI models for crop recommendation across diverse agro- climatic regions.
Table: Summary of Related Studies on Explainable & Generalizable AI for Crop Recommendation

| Study / Paper | Year | Focus / AI Techniques | Explainability / Generalizability Aspect | Key Findings |
|---|---|---|---|---|
| Enhancing crop recommendation systems with explainable artificial intelligence (XAI-CROP) (Springer Link) | 2024 | Decision Tree + XAI (LIME) for crop recommendation | Explainable AI integrated into the recommendation system; LIME generates human-understandable explanations for crop recommendations | XAI-CROP achieved high performance and provided clear reasoning for decisions, enhancing trust and interpretability for farmers; improves transparency compared to black-box models |
| AgroXAI: Explainable AI-Driven Crop Recommendation System for Agriculture 4.0 (CatalyzeX) | 2024 | ML + IoT + XAI (ELI5, LIME, SHAP, counterfactual) | Local and global explanations with counterfactuals, and alternative crop recommendations for regions, boosting interpretability in a real-time system | Model suggests crops based on weather/soil data and explains decisions both locally and globally |
| Advancing … ML for crop classification (arXiv) | 2024 | Gradient Boosting + XAI (SHAP, Grad-CAM) | Applies XAI to explain generalization for crop classification tasks using advanced XAI techniques | High accuracy (>99% metrics); models with XAI can generalize well while delivering strong explainability |
Deep Learning and Hybrid Models
Deep learning has emerged as a powerful paradigm in agricultural decision support systems, particularly where complex spatial, temporal, and nonlinear relationships exist in environmental and crop data. Unlike traditional machine learning models that rely on engineered features, deep neural networks are capable of learning hierarchical representations directly from raw data, which is particularly useful when multiple modalities (e.g., soil data, weather patterns, remote sensing imagery) are integrated [1]. However, this capacity also introduces challenges related to interpretability and generalization, key issues when scaling models across diverse agro-climatic zones.
Convolutional Neural Networks (CNNs)
Convolutional Neural Networks (CNNs) are widely used in tasks involving spatial data,
such as satellite imagery and soil classification maps. In crop recommendation, CNNs extract
spatial features that capture variations in terrain, land cover, vegetation indices, and
other geoprocessed indicators [2]. For example, CNN models applied to multispectral satellite imagery have been shown to enhance crop type classification and suitability
prediction, especially when augmented with environmental attributes [3]. These networks are effective in identifying spatial correlations that are not easily captured by shallow models.
However, CNNs require substantial labeled data for training, and their performance depends on image resolution and quality.
Table (continued): Summary of Related Studies on Explainable & Generalizable AI for Crop Recommendation

| Study / Paper | Year | Focus / AI Techniques | Explainability / Generalizability Aspect | Key Findings |
|---|---|---|---|---|
| Advancing crop recommendation system with supervised ML and XAI (Nature) | 2025 | Supervised machine learning + XAI (LIME) | Explains machine learning predictions in crop recommendation, offering deeper insight into model decisions | High accuracy metrics combined with explainability to aid agronomists' decision-making |
| A Data-Driven Crop Recommendation System with Explainable AI (gnanaganga.alliance.edu.in) | 2025 | ML + XAI (LIME) in a full software system | Focuses on embedding XAI into an application architecture, enhancing transparency of crop recommendations to end users | XAI integration improved interpretability of important features (e.g., pH, temperature, moisture) |
| Explainable AI-based hybrid ML model for enhanced crop yield prediction (PubMed) | 2025 | Hybrid (RF, LSTM, XGBoost) + XAI (SHAP, LIME & counterfactual) | Demonstrates explainability and generalizability across multi-year, multi-state agricultural data; integrates multiple XAI methods | Achieved high accuracy and transparent feature contributions; demonstrates trustworthiness in yield forecasts |
| Enhanced interpretable … | 202… | CNN + XAI (LIME, …) | Emphasizes interpretability and … | Found deep learning models … |
Additionally, the internal feature representations of CNNs are often opaque, complicating interpretability unless paired with explanation techniques such as attention heatmaps or saliency maps [4].
Recurrent Neural Networks (RNNs) and LSTM
Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks, are tailored to time-series data since they maintain memory of previous inputs. In crop recommendation systems, temporal weather data (e.g., monthly rainfall, temperature trends) influences crop viability profoundly [5]. LSTM models capture these sequential dependencies, enabling better modeling of seasonal crop patterns, phenology, and stress responses [6]. For instance, LSTM networks trained on multi-season climate datasets have demonstrated improvements in predictive accuracy compared to static models that ignore temporal dynamics.
Nevertheless, RNNs and LSTM architectures can be computationally demanding, and their performance declines if the training dataset does not sufficiently represent seasonal variability across regions [7].
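The sequence inputs such models consume are typically built by sliding a fixed-length window over the climate series. A minimal sketch, with synthetic monthly rainfall values:

```python
# Sketch of turning a monthly rainfall series into fixed-length
# (input_window, next_value) pairs of the kind an LSTM is trained on.
# The rainfall values are synthetic illustrations.

monthly_rainfall = [30, 45, 80, 120, 210, 260, 240, 180, 90, 50, 35, 25]

def make_sequences(series, window=3):
    """Return (input_window, next_value) pairs for sequence learning."""
    return [(series[i:i + window], series[i + window])
            for i in range(len(series) - window)]

pairs = make_sequences(monthly_rainfall)
print(len(pairs), pairs[0])  # 9 windows; first is ([30, 45, 80], 120)
```

Each pair asks the network to predict the next month from the preceding window, which is how seasonal dependencies enter the model; the same windowing applies to temperature or humidity series.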
Hybrid CNN-LSTM Models
To leverage both spatial and temporal features, researchers have developed hybrid models combining CNN and LSTM architectures. Here, CNN layers first extract spatial representations from environmental maps or satellite images, and LSTM layers subsequently model temporal sequences of climatic variables [8]. This hybrid design has been shown to capture complex interactions among environmental factors more effectively than standalone models.
In a multi-climatic dataset covering several agro-ecological zones, hybrid CNN-LSTM models achieved classification accuracies upwards of 90% for crop recommendation tasks, outperforming traditional models like Random Forest and SVM in certain studies [9]. These models also demonstrated improved
adaptability to spatiotemporal patterns inherent in large-scale agricultural data.
Transformer and Attention-Based Models
Attention mechanisms and Transformer architectures, originally developed for natural language processing, have more recently been applied to agricultural prediction problems. These models learn dynamic feature weighting across input sequences, allowing them to focus on the most informative aspects of weather, soil, or remote sensing data [10]. Attention scores also offer inherent explainability, as they highlight which inputs the model deems most relevant for a given prediction. Early evidence suggests that attention-based models can achieve competitive performance on crop recommendation tasks while partially addressing interpretability concerns [11]. However, these architectures are still in their infancy within this domain and require further empirical validation.
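The weighting step at the heart of attention can be shown in isolation: raw relevance scores pass through a softmax so they sum to one, and the resulting weights double as an explanation of which inputs mattered. The scores below are illustrative constants, not learned values:

```python
# Sketch of attention-style feature weighting: softmax-normalized scores
# both drive the prediction and can be read off as an explanation.
# The relevance scores are hypothetical constants.
import math

features = ["rainfall", "temperature", "soil_pH", "humidity"]
scores = [2.0, 1.0, 0.1, 0.5]

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]   # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

weights = softmax(scores)
for name, w in zip(features, weights):
    print(f"{name}: {w:.3f}")
```

In a real Transformer these scores are computed from learned query-key projections, but the normalization and its interpretability benefit are exactly as above.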
Hybrid Systems with Explainability Elements
To address the explainability challenge in deep learning, hybrid systems have been proposed where deep neural network outputs are paired with Explainable AI (XAI) techniques such as SHapley Additive exPlanations (SHAP) or Local Interpretable Model-Agnostic Explanations (LIME) [12]. In such frameworks, the deep model produces a prediction, and XAI modules provide a post-hoc explanation that quantifies the contribution of each input variable. In crop recommendation studies, SHAP has revealed that features such as rainfall, soil pH, and nitrogen levels consistently exert strong influence on model decisions [13].
Despite the benefits of such hybrid configurations, they are often computationally expensive and may not fully reconcile the trade-off between accuracy and native interpretability.
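The local, per-instance flavor of such post-hoc explanation can be sketched by perturbing one feature of a single input at a time and recording whether the prediction flips. This is a crude LIME-style probe, not LIME's actual surrogate-model fitting, and the "model" is a hypothetical stand-in:

```python
# Sketch of a LIME-style local explanation: perturb each feature of one
# instance and check whether the prediction changes. The classifier below
# is a hypothetical stand-in for a trained network.

def model(sample):
    n, ph, rainfall = sample
    return "rice" if n > 50 and rainfall > 100 else "wheat"

instance = (70.0, 6.5, 140.0)
baseline = model(instance)
explanation = {}
for i, name in enumerate(["N", "ph", "rainfall"]):
    perturbed = list(instance)
    perturbed[i] *= 0.5                    # halve one feature, keep the rest
    explanation[name] = model(tuple(perturbed)) != baseline  # prediction flips?
print(baseline, explanation)
```

Here the probe reports that nitrogen and rainfall are locally decisive for this instance while pH is not, which is the kind of case-level insight LIME provides to end users.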
Key Challenges and Considerations
Although deep learning and hybrid models offer improved predictive capabilities, several limitations persist:

- Data Requirements: Deep models require large, high-quality labeled datasets encompassing diverse agro-climatic conditions. Many agricultural datasets are fragmented, sparse, or biased toward specific regions [14].
- Interpretability: Neural architectures are inherently opaque, raising trust issues among farmers and policymakers unless augmented with XAI methods [15].
- Generalization: Models trained on data from one climatic region may not generalize well to other regions without domain adaptation or transfer learning strategies [16].
- Computational Cost: Deep and hybrid models demand significant computational resources during training and inference, which may limit their use in low-resource settings.
Datasets (CSV File)

CSV file: Crop_recommendation.csv

| Attribute Name | Description |
|---|---|
| ph | pH value of the soil. Determines soil acidity or alkalinity, affecting nutrient availability. |
| rainfall | Average annual rainfall (in mm). Indicates water availability for crop cultivation. |
| label | Target variable representing the recommended crop type based on soil and climate conditions. |
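Rows in this schema can be parsed with the standard library alone. The snippet below uses a small in-memory sample so it runs without the file; the numeric values are illustrative, not quoted from the dataset:

```python
# Sketch of reading rows with the Crop_recommendation.csv schema described
# above, using an in-memory sample. Values are illustrative only.
import csv
import io

sample_csv = """N,P,K,temperature,humidity,ph,rainfall,label
90,42,43,20.88,82.00,6.50,202.94,rice
85,58,41,21.77,80.32,7.04,226.66,rice
"""

rows = []
for row in csv.DictReader(io.StringIO(sample_csv)):
    # keep the crop label as text, convert every other column to float
    rows.append({k: (v if k == "label" else float(v)) for k, v in row.items()})

print(len(rows), rows[0]["label"], rows[0]["ph"])
```

Replacing the `io.StringIO` wrapper with `open("Crop_recommendation.csv")` yields the same row dictionaries for the full dataset.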
Table: Comparative performance of AI models

| Model Used | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Explainability Support | Generalizability |
|---|---|---|---|---|---|---|
| Decision Tree | 91.2 | 90.8 | 91.0 | 90.9 | High (rule-based, easy to interpret) | Medium |
| Random Forest | 96.8 | 96.5 | 96.7 | 96.6 | Medium (feature importance) | High |
| Support Vector Machine (SVM) | 95.4 | 95.1 | 95.3 | 95.2 | Low (black-box) | Medium |
Results and Performance Analysis
| Attribute Name | Description |
|---|---|
| N | Nitrogen content in the soil (measured in ppm). Essential for plant growth and leaf development. |
| P | Phosphorus content in the soil (ppm). Important for root development and energy transfer. |
| K | Potassium content in the soil (ppm). Helps in disease resistance and overall crop quality. |
| temperature | Average temperature of the region (in °C). Influences crop growth and metabolism. |
| humidity | Relative humidity of the environment (in %). Affects transpiration and plant health. |
Table (continued): Comparative performance of AI models

| Model Used | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Explainability Support | Generalizability |
|---|---|---|---|---|---|---|
| K-Nearest Neighbors (KNN) | 93.6 | 93.2 | 93.4 | 93.3 | Low | Low |
| XGBoost | 98.1 | 97.9 | 98.0 | 97.9 | Medium (SHAP-based explanations) | High |
| XGBoost + SHAP (Proposed Model) | 98.1 | 98.0 | 98.1 | 98.0 | Very High (local & global XAI) | Very High |
Figure 1. Architecture of the Crop Recommendation System
Table 1. Statistical summary of model performance

| Statistic | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
|---|---|---|---|---|
| Mean | 95.53 | 95.25 | 95.42 | 95.32 |
| Median | 96.10 | 95.80 | 96.00 | 95.95 |
| Minimum | 91.20 | 90.80 | 91.00 | 90.90 |
| Maximum | 98.10 | 98.00 | 98.10 | 98.00 |
| Standard Deviation | 2.37 | 2.45 | 2.34 | 2.39 |
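As a quick check, the mean, median, minimum, and maximum accuracy in the summary can be recomputed from the six model accuracies reported in the comparison table:

```python
# Recomputing headline statistics from the six accuracy values reported for
# Decision Tree, Random Forest, SVM, KNN, XGBoost, and XGBoost + SHAP.
from statistics import mean, median

accuracies = [91.2, 96.8, 95.4, 93.6, 98.1, 98.1]

print("mean:", round(mean(accuracies), 2))      # 95.53
print("median:", round(median(accuracies), 2))  # 96.1
print("min:", min(accuracies), "max:", max(accuracies))  # 91.2, 98.1
```

The recomputed mean (95.53), median (96.10), minimum (91.20), and maximum (98.10) agree with the summary table.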
Figure 2. Average Rainfall per Crop (bar graph)
Figure 3. Nitrogen Levels (line graph)
Figure 4. Crop Distribution (pie chart)
Challenges & Limitations

Challenges:

- Limited availability of large-scale, multi-regional agricultural datasets.
- Data imbalance across crops and agro-climatic zones.
- Presence of missing, noisy, or inconsistent soil and climate data.
- Domain shift due to variations in soil properties, rainfall, and temperature patterns.
- Cross-regional performance degradation when models are applied to unseen regions.
- High computational requirements for deep learning and hybrid models.
- Limited rural digital infrastructure and internet connectivity.
- Difficulty of integrating explainability methods with complex AI architectures.
- Ensuring fairness and reducing bias across diverse farming communities.
- Farmer trust and adoption barriers due to lack of transparency.
- Data privacy and security concerns in agricultural data collection.
- Need for real-time and scalable deployment systems.
Limitations:

- Limited external cross-climatic validation in existing studies.
- Heavy dependence on accuracy as the primary evaluation metric.
- Insufficient reporting of precision, recall, F1-score, and robustness metrics.
- Black-box nature of deep neural networks limiting interpretability.
- Post-hoc explainability techniques may provide only partial interpretations.
- Lack of standardized benchmark datasets for model comparison.
- Short-term experimental evaluations without long-term yield impact analysis.
- Limited implementation of transfer learning and domain adaptation strategies.
- Small sample sizes in some empirical agricultural studies.
8. Conclusion & Future Scope
This study conducted a comprehensive comparative review of explainable and generalizable Artificial Intelligence (AI) models for crop recommendation across diverse agro-climatic regions. The analysis indicates that machine learning algorithms such as Decision Trees, Random Forest, Support Vector Machines, K-Nearest Neighbors, and Gradient Boosting models have demonstrated strong predictive capabilities when trained on structured agricultural datasets containing soil nutrients (NPK), pH levels, rainfall, humidity, and temperature variables. Ensemble approaches, particularly Random Forest and XGBoost, consistently outperform single models in terms of classification accuracy and robustness.
However, high predictive accuracy alone does not guarantee practical applicability. The findings emphasize that explainability and cross-regional generalization are equally critical for sustainable agricultural decision- support systems. Explainable AI techniques such as SHAP and LIME enhance transparency by identifying key contributing features influencing crop predictions. Such interpretability is essential in agricultural contexts where farmers require understandable reasoning before adopting technological recommendations.
Despite these advancements, significant limitations remain. Many existing studies rely on region-specific datasets, leading to reduced performance when models are applied to new agro-climatic zones due to domain shift. Additionally, deep learning and hybrid
architectures often function as black-box systems, limiting transparency. Computational requirements, infrastructure constraints in rural areas, data imbalance, and lack of standardized benchmarking frameworks further restrict scalability.
Overall, the study concludes that an effective crop recommendation system must achieve a balanced integration of three core dimensions: predictive performance, interpretability, and cross-regional adaptability. Systems that fail to address any one of these aspects may struggle with real-world implementation and farmer acceptance.
Future Scope
Future research should aim to design holistic AI frameworks that integrate explainability, generalization, and scalability. The following research directions are recommended:
- Creation of Standardized Multi-Regional Agricultural Datasets: Developing large-scale, open-access benchmark datasets covering multiple agro-climatic regions will enhance comparative research and model validation.
- Adoption of Transfer Learning and Domain Adaptation Techniques: Incorporating transfer learning can improve model robustness across different climatic conditions and reduce performance degradation.
- Development of Inherently Interpretable Models: Future systems should focus on designing transparent model architectures rather than relying solely on post-hoc explanation techniques.
- Hybrid and Ensemble-Explainability Frameworks: Combining ensemble learning with explainable AI modules may provide both high accuracy and clear decision justification.
- Integration of IoT, Remote Sensing, and Climate Forecasting: Real-time soil sensors, satellite imagery, and weather forecasting data can improve dynamic crop recommendation accuracy.
- Federated and Privacy-Preserving Learning Approaches: Ensuring data privacy while enabling collaborative learning across regions can enhance scalability and trust.
- Longitudinal Field-Based Validation Studies: Future studies should evaluate the long-term yield improvement, economic benefits, and sustainability impact of AI recommendations.
- Farmer-Centric System Design and Digital Literacy Support: Mobile-based interfaces with local-language support and visual explanations can increase adoption among small and marginal farmers.
- Bias Detection and Fairness-Aware Modeling: Implementing fairness evaluation metrics will help ensure equitable recommendations across diverse farming communities.
- Climate-Resilient Agricultural Modeling: Incorporating climate change projections into AI systems can support sustainable crop planning under evolving environmental conditions.
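To make the ensemble-explainability direction concrete, the sketch below pairs a toy stump ensemble with occlusion-based feature attribution. This is a minimal pure-NumPy illustration, not the paper's implementation: a deployed system would use Random Forest or XGBoost together with the SHAP library, and the feature names, thresholds, and toy dataset here are illustrative assumptions only.

```python
import numpy as np

# Illustrative feature names; the dataset below is synthetic toy data,
# not drawn from the paper's experiments.
FEATURES = ["nitrogen", "rainfall", "temperature", "ph"]

def fit_stumps(X, y):
    """One median-threshold decision stump per feature: a toy stand-in
    for the tree ensembles (Random Forest, XGBoost) discussed above."""
    stumps = []
    for f in range(X.shape[1]):
        t = np.median(X[:, f])
        left = int(np.bincount(y[X[:, f] <= t], minlength=2).argmax())
        right = int(np.bincount(y[X[:, f] > t], minlength=2).argmax())
        stumps.append((f, t, left, right))
    return stumps

def votes(stumps, x):
    """Collect each stump's vote for a single sample."""
    return np.array([l if x[f] <= t else r for f, t, l, r in stumps])

def occlusion_attribution(stumps, x, baseline):
    """Occlusion attribution, a crude model-agnostic cousin of SHAP/LIME:
    replace one feature at a time with a baseline value and measure the
    fraction of ensemble votes that flip."""
    base = votes(stumps, x)
    scores = {}
    for i, name in enumerate(FEATURES):
        x2 = x.astype(float)
        x2[i] = baseline[i]
        scores[name] = float(np.mean(votes(stumps, x2) != base))
    return scores

# Toy data: the suitability label is driven purely by nitrogen content.
X = np.array([[10, 100, 20, 6], [20, 200, 30, 7], [30, 100, 20, 6],
              [40, 200, 30, 7], [60, 100, 20, 6], [70, 200, 30, 7],
              [80, 100, 20, 6], [90, 200, 30, 7]], dtype=float)
y = (X[:, 0] > 50).astype(int)

stumps = fit_stumps(X, y)
scores = occlusion_attribution(stumps, np.array([80., 100., 20., 6.]),
                               X.mean(axis=0))
print(scores)  # nitrogen receives the largest attribution score
```

On this toy data, occluding nitrogen flips the corresponding stump's vote while the other features leave the ensemble unchanged, mirroring (in miniature) the dominant role the study attributes to nitrogen, rainfall, and temperature in its SHAP analysis.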
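The federated-learning direction can likewise be sketched as a minimal FedAvg loop, assuming two hypothetical agro-climatic regions acting as clients that train a shared logistic-regression model without sharing raw data. This is an illustrative sketch under those assumptions, not a production federated system; in practice a framework such as Flower or TensorFlow Federated would handle communication, security, and aggregation.

```python
import numpy as np

def local_update(w, X, y, lr=0.5, epochs=50):
    """One client's local training: full-batch gradient steps of logistic
    regression on its private regional data (data never leaves the client)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def fedavg(clients, w, rounds=20):
    """Federated averaging: the server broadcasts global weights, each
    region trains locally, and the server averages the returned weights,
    weighted by client dataset size."""
    for _ in range(rounds):
        updates = [local_update(w.copy(), X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        w = np.average(updates, axis=0, weights=sizes)
    return w

rng = np.random.default_rng(0)

def make_region(scale, n):
    """Two hypothetical regions share the same underlying rule (label = 1
    when the first feature is positive) but differ in feature scale."""
    X = np.c_[rng.normal(0, scale, n), np.ones(n)]  # feature + bias column
    return X, (X[:, 0] > 0).astype(float)

clients = [make_region(1.0, 100), make_region(2.0, 150)]
w = fedavg(clients, np.zeros(2))
Xa, ya = clients[0]
acc = np.mean(((Xa @ w) > 0) == ya)
```

Because the regions share the same underlying decision rule, the averaged model classifies either region's data accurately even though neither client ever transmits its raw samples, which is the privacy property the future-scope item targets.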
