DOI : https://doi.org/10.5281/zenodo.19468695
- Open Access

- Authors : Piyush Vinde, Aneesh Chavan, Rohit Gadai, Aarin Yadav, Dr. Sunny Sall
- Paper ID : IJERTV15IS031723
- Volume & Issue : Volume 15, Issue 03, March – 2026
- Published (First Online): 08-04-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
AI Based Plant Disease Recommendation and Solution
Piyush Vinde
Department Of Computer Engineering St Johns College Of Engineering (Mumbai University) Palghar, Maharashtra, India
Aneesh Chavan
Department Of Computer Engineering St Johns College Of Engineering (Mumbai University) Palghar, Maharashtra, India
Aarin Yadav
Department Of Computer Engineering St Johns College Of Engineering (Mumbai University) Palghar, Maharashtra, India
Rohit Gadai
Department Of Computer Engineering St Johns College Of Engineering (Mumbai University) Palghar, Maharashtra, India
Dr. Sunny Sall
Assistant Professor, Department of Computer Engineering St Johns College Of Engineering (Mumbai University) Palghar, Maharashtra, India
Abstract – Plant diseases pose a serious threat to agricultural productivity and food security, creating a need for intelligent and accessible diagnostic tools. This paper presents a web-based smart agriculture system that integrates deep learning-based plant disease recognition with an AI-driven advisory framework. A Convolutional Neural Network (CNN) is trained on a multi-class leaf image dataset to automatically classify plant diseases across 38 categories. User-submitted leaf images are processed to generate classifications and confidence scores, allowing the user to rapidly identify whether a leaf has a certain disease without assistance from an expert. The classification output is delivered through a Streamlit interface, which allows a user to upload an image or capture one with their camera and receive real-time feedback. Beyond classification, the system uses AI to explain the result in a structured format that states the symptoms, causes, and recommended treatments for the detected disease. Unlike conventional static recommendation systems, the proposed approach leverages a large language model to provide adaptive and context-aware guidance. An integrated chatbot further enables users to seek general farming guidance within the same platform. Overall, the system offers a scalable and user-centric solution for intelligent crop health management.
Keywords – CNN, TensorFlow, Streamlit, deep learning
- INTRODUCTION
Agriculture remains a fundamental pillar of global food security and economic sustainability. The majority of the world’s people rely on agriculture, directly or indirectly, for their livelihoods, food security, and trade. Plant diseases significantly impact agricultural productivity, with approximately 20% of global crop yield lost annually to various plant infections [2][6]. Fungal, bacterial, and viral infections affect plant leaves, stems, and fruits, disrupting photosynthesis and overall plant health. In many regions, delayed detection and improper treatment of such diseases result in economic losses, excessive pesticide usage, and long-term soil degradation. Conventional methods of plant disease identification rely on visual inspection by experts, which is time-consuming and prone to human error; early and accurate identification of plant diseases is therefore critical for plant health and growth [3][6]. Moreover, many conventional methods have limited sensitivity or specificity and are not efficient in tackling emerging crop diseases [5].
The majority of existing detection methods identify diseases only after visible symptoms appear, making early-stage detection a significant challenge; there is a need for rapid, non-destructive, and scalable detection techniques for timely disease management [4]. A central challenge in preventing accidental pathogen dissemination and disease outbreaks is the difficulty of detecting many plant diseases at their early stages. Imaging-based tools can provide high-throughput, non-invasive disease detection, facilitating the automation of crop disease monitoring [5]. As a result, the way diseases are detected from images has changed significantly.
Convolutional neural networks (CNNs) can automatically extract hierarchical levels of visual features (e.g., texture, color, and shape patterns) from raw images. Unlike traditional machine-learning techniques that require manual extraction of visual features and several iterations of re-evaluation to develop effective features, CNN-based models learn to identify useful image features during training, improving both classification accuracy and generalization performance. Numerous studies have successfully applied CNN architectures to classify plant diseases across different crops, highlighting their potential for automated agricultural diagnostics. Additionally, traditional imaging techniques often face limitations such as environmental dependency and low efficiency in large-scale deployment [4].
While many existing works focus primarily on improving classification accuracy through transfer learning, ensemble models, or architecture optimization, fewer systems address the broader challenge of delivering actionable guidance to farmers in an accessible and interactive manner. In several implementations, disease detection is treated as an isolated task, where the model outputs a label without providing contextual explanation, treatment recommendations, or additional support mechanisms. However, effective crop management requires more than identification; it demands timely advisory information, clarity of diagnosis, and user-friendly deployment platforms that can be accessed by non-experts.
Motivated by these limitations, this work proposes a smart plant disease recognition and advisory system that integrates deep learning-based image classification with an AI-driven interactive assistance framework. The proposed system employs a custom-built Convolutional Neural Network developed using TensorFlow and Keras to classify plant leaf images into 38 disease and healthy categories derived from a publicly available dataset. The model processes 128×128 RGB images and outputs probabilistic predictions along with confidence scores, enabling rapid and reliable disease identification.
Artificial Intelligence (AI) holds transformative potential across various sectors; however, its widespread adoption is hindered by multiple challenges across technical, ethical, and societal domains. As highlighted in previous research, AI systems require vast amounts of high-quality, labeled data, which is often unavailable or of poor quality, limiting their effectiveness in real-world applications. Additionally, many AI models are highly specialized and struggle to generalize beyond specific tasks, which restricts their broader applicability [1].
To enhance usability and accessibility, the system is deployed as a web-based application using Streamlit, allowing users to upload leaf images or capture them directly through a device camera. The interface presents predicted disease labels and confidence values, and maintains a session-based prediction history for user reference. Recognizing the diverse needs of farmers, the platform delivers advisory content in multiple formats, including on-screen text, downloadable PDF reports generated programmatically, and synthesized speech using text-to-speech technology. This multimodal feedback mechanism improves comprehension and supports users with varying literacy levels.
The primary contributions of this work can be summarized as follows:
- Development of a custom CNN-based multi-class plant disease classifier;
- Integration of an AI-powered advisory and chatbot module for interactive farmer support;
- Deployment of a full-stack, user-friendly web application for real-time usage;
- Implementation of multimodal output mechanisms including text, audio, and PDF reporting; and
- Incorporation of contextual weather information to support informed agricultural decision-making.
Overall, the proposed system aims to bridge the gap between automated disease detection and practical field-level decision support.
- LITERATURE SURVEY
M. Rana et al. (2024) [1] analyzed the key obstacles hindering the full realization and adoption of Artificial Intelligence (AI) across various domains. The study categorizes these challenges into technical, ethical, societal, research, implementation, and regulatory aspects. It highlights major technical issues such as the requirement for large volumes of high-quality labeled data, lack of generalization, scalability limitations, and vulnerability to adversarial attacks. Additionally, the study emphasizes ethical and societal concerns including bias, privacy issues, and job displacement due to automation. Implementation challenges such as high development costs, integration with legacy systems, and shortage of skilled professionals are also discussed. The paper further identifies regulatory uncertainty and lack of universal ethical guidelines as barriers to widespread AI adoption, stressing the need for collaborative efforts among researchers, policymakers, and industries to overcome these challenges.
However, the study mainly provides a broad conceptual analysis of challenges without focusing on specific application domains or providing detailed experimental validation. Additionally, while it identifies multiple obstacles, it offers limited practical implementation strategies tailored to real-world deployment scenarios.
A. K. Singh et al. (2024) [2] proposed a Vision Transformer-based plant disease detection system enhanced with a generative data augmentation technique called LeafyGAN. The approach combines pix2pix GAN for accurate leaf segmentation and CycleGAN for generating realistic synthetic disease patterns on leaf regions, thereby addressing the issue of limited and imbalanced datasets. A lightweight MobileViT classifier is trained on the augmented dataset to perform disease classification efficiently. The model achieved a high accuracy of 99.92% on the PlantVillage dataset and demonstrated competitive performance with fewer parameters compared to existing deep learning models, making it suitable for deployment on resource-constrained devices.
However, the approach relies heavily on synthetic data generation, which may not fully capture the variability of real-world field conditions. Additionally, while the model is designed to be lightweight, its performance on diverse real-world datasets such as PlantDoc is comparatively lower, indicating challenges in generalization across different environments.
U. Barman et al. (2024) [3] proposed a smartphone-based plant disease detection system called ViT-SmartAgri, which utilizes a Vision Transformer (ViT) model for identifying tomato leaf diseases. The system integrates deep learning with a mobile (Android) application to enable real-time disease diagnosis using images captured from smartphones. The model leverages self-attention mechanisms to capture global relationships between image patches, improving feature extraction and classification performance. The study used the PlantVillage dataset, consisting of 10,010 tomato leaf images across 10 disease classes, and achieved a testing accuracy of 90.99%, demonstrating its effectiveness for smart agriculture applications.
However, the system is primarily evaluated on the PlantVillage dataset, which contains controlled and well-structured images. As a result, its performance in real-world agricultural environments may vary due to challenges such as varying lighting conditions, background noise, and differences in image quality. Additionally, the model's generalization across diverse field conditions may be limited, indicating the need for further validation using real-world datasets.
W. Liu et al. (2024) [4] proposed an early detection method for pine wilt disease using UAV-based RGB imaging combined with hyperspectral image reconstruction and SVM classification. The approach reconstructs hyperspectral data (400–700 nm) from UAV images and extracts spectral features to distinguish infected and healthy trees, achieving improved detection accuracy and enabling early-stage disease identification. However, the study relies on a relatively small dataset (320 samples) and controlled data collection conditions, which may limit scalability and generalization. Additionally, factors such as high drone altitude, limited pixel resolution, and sensitivity to parameter tuning can affect model performance in real-world large-scale environments.
X. Zhang et al. (2024) [5] proposed a hyperspectral imaging (HSI)-based approach for early detection of tomato bacterial leaf spot disease using machine learning models trained on spectral and vegetation index (VI) features. The study demonstrated that HSI can detect disease at pre-symptomatic stages and effectively differentiate bacterial spots from abiotic leaf spots. It also showed that using VI-based features improves classification performance by 26–37% compared to raw spectral data, and that key wavelength bands (e.g., ~750 nm and ~1400 nm) are critical for early detection. However, the study is limited by controlled experimental conditions, use of detached leaf samples, and relatively small datasets, which may reduce real-world applicability. Additionally, whole-plant image classification showed lower accuracy due to variations in leaf angles and environmental factors, indicating challenges in scaling the approach for field deployment.
A. H. Ali et al. (2024) [6] proposed an ensemble-based deep learning approach for plant disease classification using multiple architectures including DenseNet201, EfficientNetB0, EfficientNetB3, InceptionResNetV2, and ResNet50v2. The study integrates image preprocessing (CLAHE with adaptive median filtering) and class-weighted data balancing to improve model performance. The system was trained on the PlantVillage dataset (87,000 images, 38 classes) and evaluated using multiple ensemble combinations. The best-performing ensemble achieved 99.89% accuracy and high F1-score, demonstrating strong generalization and robustness in multi-class plant disease classification.
However, the approach requires high computational resources due to the use of multiple deep learning models in ensemble form, making it less suitable for deployment on low-resource devices such as smartphones. Additionally, despite high accuracy, the model lacks explainability and may face challenges in real-world field conditions beyond controlled datasets. The system also focuses primarily on model performance and does not address user-level features such as real-time deployment or integration with farmer support tools.
- METHODOLOGY
Many existing works focus primarily on classification accuracy, but fewer systems provide actionable guidance, contextual explanations, or interactive support for farmers. CNN-based models have emerged as a predominant approach for visual recognition tasks due to their ability to automatically learn hierarchical feature representations [3]. The proposed system therefore integrates deep learning-based disease detection with an AI-driven advisory framework that provides symptoms, causes, and treatment recommendations [2]. The methodology consists of several stages, including image acquisition, preprocessing, disease classification using a Convolutional Neural Network (CNN), and generation of intelligent recommendations for the user.
- Overall System Architecture
The overall architecture of the proposed plant disease recognition system is illustrated in Fig. 1. The system follows a modular architecture consisting of four main components: the user interface layer, the backend processing module, AI service integration, and the output generation module. These components work together to detect plant diseases from leaf images and provide intelligent treatment recommendations to users. The process begins with the user (farmer) interacting with the system through the frontend interface developed using Streamlit. Users can either upload a leaf image or capture one with a device camera and use it to diagnose problems affecting that plant. In addition to image input, users can also enter their city to retrieve weather information and interact with the integrated agriculture chatbot to ask farming-related questions. Once the leaf image is submitted, it is transmitted to the backend processing module, which is implemented using Python and TensorFlow. The uploaded image undergoes preprocessing operations that prepare it for classification, and the processed image is then passed to the trained Convolutional Neural Network (CNN) model, which analyzes visual patterns such as leaf texture, color variations, and shape characteristics in order to classify the plant disease. After the prediction is generated, the system integrates with AI services, including the OpenAI API and a weather API. The OpenAI API is used to generate intelligent treatment recommendations and explanations related to the detected disease, while the weather API retrieves environmental information that may influence plant health and crop management decisions. Limited labeled datasets and class imbalance remain significant challenges in training accurate plant disease classification models [2].
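The weather lookup described above can be sketched as a simple HTTP request. The provider shown (OpenWeatherMap) and its endpoint are assumptions for illustration, since the paper does not name the specific weather API used.

```python
# Sketch of the weather-lookup step; OpenWeatherMap is an assumed provider.
from urllib.parse import urlencode

BASE_URL = "https://api.openweathermap.org/data/2.5/weather"  # assumed endpoint

def build_weather_url(city: str, api_key: str) -> str:
    """Build the request URL for the user's city (metric units)."""
    params = urlencode({"q": city, "appid": api_key, "units": "metric"})
    return f"{BASE_URL}?{params}"

# The actual call would be, e.g.: requests.get(build_weather_url("Palghar", KEY)).json()
url = build_weather_url("Palghar", "DEMO_KEY")
```

Keeping the URL construction separate from the network call makes this step easy to unit-test without an API key.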
Fig. 1. Overall system architecture of the AI Based Plant Disease Recommendation and Solution
- Dataset Description
The proposed system uses a publicly available plant leaf image dataset containing images of both healthy and diseased plants across multiple crop species. The dataset includes 38 different plant disease classes, representing various infections caused by fungi, bacteria, and viruses, as well as healthy leaf samples. Each image in the dataset is labeled according to the corresponding disease category, allowing the CNN model to learn visual patterns associated with different plant diseases. The dataset includes images captured under varying environmental conditions such as differences in lighting, leaf orientation, and background complexity. This diversity improves the robustness and generalization capability of the trained model. To ensure reliable model training and evaluation, the dataset is divided into three subsets: the training set, validation set, and testing set. The training dataset is used to train the CNN model, while the validation dataset is used to monitor model performance and adjust training parameters. The testing dataset is used to evaluate the final model accuracy on unseen data.
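The three-way split described above can be sketched as follows. The 80/10/10 ratio is an illustrative assumption, as the paper does not state the exact proportions used.

```python
import numpy as np

def split_indices(n_samples: int, train: float = 0.8, val: float = 0.1, seed: int = 42):
    """Shuffle sample indices and partition them into train/val/test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(n_samples * train)
    n_val = int(n_samples * val)
    # Remaining indices form the held-out test set.
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(1000)
```

Fixing the random seed makes the split reproducible across training runs, so validation and test performance remain comparable.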
- Image Preprocessing
Image preprocessing is important for improving how well deep learning models perform, and the following steps ensure that all images fed into the CNN model are in a consistent format. First, all uploaded leaf images are resized to a common dimension of 128 × 128 pixels to meet the input requirements of the neural network; this also reduces computational complexity and processing time. Next, the pixel values of each image are normalized to the range 0 to 1, which stabilizes the training process and improves the learning ability of the network. Finally, the normalized images are converted into tensor format for use with the TensorFlow deep learning framework. These preprocessing steps ensure that image data is consistent and correctly formatted, so that the model can classify plant diseases accurately.
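A minimal sketch of these preprocessing steps, using Pillow and NumPy (the paper mentions NumPy and OpenCV; Pillow is substituted here purely for illustration):

```python
import numpy as np
from PIL import Image

def preprocess(image: Image.Image) -> np.ndarray:
    """Resize to 128x128, scale pixels to [0, 1], and add a batch dimension."""
    resized = image.convert("RGB").resize((128, 128))
    array = np.asarray(resized, dtype=np.float32) / 255.0  # normalize to [0, 1]
    return np.expand_dims(array, axis=0)  # shape: (1, 128, 128, 3)

# Example with a synthetic leaf-green image standing in for an upload.
batch = preprocess(Image.new("RGB", (640, 480), color=(60, 120, 40)))
```

The batch dimension is added because Keras models expect inputs of shape (batch, height, width, channels) even for a single image.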
- CNN Model Architecture
Fig. 2. Convolutional Neural Network (CNN) architecture
The plant disease classification module in the proposed system is implemented using a Convolutional Neural Network (CNN) developed with the TensorFlow and Keras deep learning frameworks. CNNs are widely used in image recognition tasks because they can automatically learn hierarchical visual features, such as texture, color, and shape patterns, from raw image data, making them highly suitable for plant disease detection from leaf images; unlike traditional machine learning techniques, CNN-based models learn these feature representations automatically during training, improving classification accuracy and generalization [2]. The input to the model is a leaf image resized to 128 × 128 pixels with three RGB channels. This standardized input size ensures consistent processing and efficient model training. The first section of the CNN consists of convolutional layers with Rectified Linear Unit (ReLU) activation, which perform feature extraction by applying several convolutional filters to the input image.
The convolutional layers detect low-level features such as edges, color variations, and texture patterns that are commonly associated with plant diseases. Pooling layers then downsample each feature map produced by the convolutional layers, reducing its dimensions and the amount of computation required during training; pooling also makes the model more general and less sensitive to small differences in input images. The resulting feature maps are fed into a fully connected layer, which integrates the patterns learned from the previous convolution and pooling operations into high-level features and prepares them for the final classification. The last layer uses the Softmax activation function to produce a probability score for each possible plant disease class; the class with the maximum probability is the predicted disease, and its score indicates how confident the network is in that prediction. This architecture therefore provides an accurate means of identifying plant diseases from leaf images and enables automated monitoring of crop health.
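The layer sequence described above can be sketched in Keras as follows. The exact filter counts and layer depths are illustrative assumptions, since the paper does not list the full architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes: int = 38) -> tf.keras.Model:
    """Conv/ReLU feature extraction, pooling, and a Softmax classification head."""
    return models.Sequential([
        layers.Input(shape=(128, 128, 3)),          # 128x128 RGB input
        layers.Conv2D(32, 3, activation="relu"),     # low-level edge/texture features
        layers.MaxPooling2D(),                       # downsample feature maps
        layers.Conv2D(64, 3, activation="relu"),     # higher-level patterns
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),        # fully connected reasoning layer
        layers.Dense(num_classes, activation="softmax"),  # one probability per class
    ])

model = build_model()
```

Training would then use `model.compile(optimizer="adam", loss="categorical_crossentropy")` followed by `model.fit(...)` on the preprocessed dataset.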
- Disease Prediction Process
Once the CNN model is trained, it is used to predict plant diseases from new leaf images uploaded by users through the web
interface. When a user uploads or captures a plant leaf image, the system first performs preprocessing operations such as resizing and normalization to ensure that the image matches the input format required by the trained CNN model. The processed image is then passed to the CNN classification model, which analyzes the visual characteristics of the leaf, such as color patterns, texture variations, and structural features. Based on the extracted features, the model computes probability scores for each disease class using the Softmax output layer, and the class with the highest probability is selected as the predicted disease category. Along with the predicted disease label, the system also generates a confidence score that indicates the reliability of the prediction and helps users understand how certain the model is about the classification result. The prediction results are then forwarded to the advisory module for generating treatment suggestions and disease management recommendations. To maintain a record of system usage, the detected disease information can also be stored in the system database, allowing users to review previous diagnoses and monitor plant health conditions over time.
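The label-and-confidence step can be sketched with NumPy. The class names shown are an illustrative two-class subset of the 38 categories, not the full label list.

```python
import numpy as np

CLASS_NAMES = ["Apple___Apple_scab", "Apple___healthy"]  # illustrative subset

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw model outputs into a probability distribution."""
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

def predict_label(logits: np.ndarray):
    """Return the top class and its probability as the confidence score."""
    probs = softmax(logits)
    top = int(np.argmax(probs))
    return CLASS_NAMES[top], float(probs[top])

label, confidence = predict_label(np.array([3.2, 0.1]))
```

In the real system the logits come from `model.predict(batch)`; the same argmax-plus-probability logic yields the disease name and confidence shown to the user.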
- AI-Powered Advisory Module
After detecting the plant disease, the system activates an AI-powered advisory module that provides detailed information and treatment guidance for the detected disease. This module is integrated using the OpenAI API, which enables the system to generate natural language explanations and recommendations related to plant diseases. The advisory module provides structured information including the symptoms of the disease, possible causes, preventive measures, and recommended treatment methods such as pesticide usage or crop management techniques. Unlike traditional rule-based systems, the use of conversational artificial intelligence allows the system to generate dynamic and context-aware responses that are easier for farmers to understand. In addition to disease-specific recommendations, the system also includes an agriculture chatbot that allows users to ask general farming-related questions. The chatbot provides real-time responses and helps farmers obtain useful information related to crop health, disease prevention, and agricultural practices.
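The structured-advisory request can be sketched as below. The prompt wording and model name are assumptions; only the prompt construction is shown as runnable code, with the OpenAI call itself indicated in a comment since it requires an API key.

```python
def build_advisory_prompt(disease: str, confidence: float) -> str:
    """Compose a structured advisory request for the detected disease."""
    return (
        f"A leaf was classified as '{disease}' with {confidence:.1%} confidence. "
        "List the disease's symptoms, causes, preventive measures, and "
        "recommended treatments in a short, structured format for a farmer."
    )

prompt = build_advisory_prompt("Apple Scab", 0.9969)

# The call itself (model name is an assumption):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# ).choices[0].message.content
```

Embedding the predicted label and confidence in the prompt is what makes the advisory context-aware rather than a static lookup.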
- Web-Based Application Deployment
The system is deployed as a web-based application using the Streamlit framework. Streamlit provides a lightweight and interactive interface that allows users to easily interact with the plant disease detection system without requiring advanced technical knowledge. Through the Streamlit interface, users can upload plant leaf images or capture images directly using a device camera. The system processes the image and displays the predicted disease name along with the corresponding confidence score. To enhance usability, the platform also integrates additional features such as real-time weather information, which is retrieved using a weather API. This environmental data helps users understand conditions that may influence plant diseases. Furthermore, the system provides multimodal output formats including on-screen prediction results, AI-generated treatment suggestions, synthesized text-to-speech audio output, and downloadable PDF reports containing the diagnosis and recommended solutions. These features make the system more accessible and practical for farmers and agricultural practitioners.
- SYSTEM REQUIREMENT
To ensure the efficient functioning of the proposed AI-based plant disease recognition system, appropriate hardware and software resources are required. The system is designed to operate efficiently in a web-based environment and support real-time plant disease detection and recommendation generation. Proper system requirements help maintain performance, reliability, and scalability of the application when accessed by multiple users.
G. Hardware
The proposed system can be developed and executed on a standard computing environment. During the development and testing phase, the system can run on a personal computer with moderate hardware specifications. A computer with an Intel i5 processor or higher, 8 GB RAM, and at least 100 GB of storage is sufficient for model training, application development, and testing. For large-scale deployment and multi-user access, the application can be hosted on cloud-based servers. The cloud infrastructure enables the system to scale efficiently and process multiple user requests simultaneously. Additionally, GPU-enabled systems may be used during the model training phase to accelerate deep learning computations and reduce training time.
H. Software aspect
The proposed system is implemented using modern software technologies that support machine learning and web-based application development. The backend of the system is developed using the Python programming language, which provides powerful libraries for machine learning and data processing. The plant disease detection model is implemented using TensorFlow and Keras, which are widely used deep learning frameworks for building and training neural networks. Image processing operations are performed using libraries such as NumPy and OpenCV to handle image manipulation and preprocessing tasks. The user interface of the system is developed using the Streamlit framework, which allows the creation of interactive web applications for machine learning models. Streamlit provides an easy-to-use interface where users can upload plant leaf images and receive disease predictions in real time. Additionally, the system integrates external services such as the OpenAI API for generating disease explanations and treatment recommendations. For generating additional outputs, libraries supporting text-to-speech conversion and PDF report generation are also integrated into the system. These software technologies enable efficient communication between system components and ensure smooth operation of the AI-based plant disease recognition platform.
- DESIGN AND IMPLEMENTATION
Fig.3. Flowchart
The flowchart shown in Fig. 3 illustrates the operational workflow of the proposed AI-based plant disease recognition system. The system follows a sequential process starting from image input and ending with disease prediction and treatment recommendation.
- Start
The process begins when the user accesses the plant disease recognition system through the web-based application interface.
- Upload or Capture Leaf Image
In this stage, the user uploads an image of a plant leaf or captures it directly using a device camera through the application interface. This image serves as the input data for the disease detection process.
- Image Preprocessing
Once the leaf image is uploaded, preprocessing techniques are applied to prepare the image for analysis. These operations include resizing the image to the required input dimensions, normalizing pixel values, and converting the image into a format suitable for the deep learning model.
- Disease Prediction
After preprocessing, the prepared image is passed to the trained Convolutional Neural Network (CNN) model. The model analyzes visual features such as leaf color patterns, texture variations, and structural characteristics to classify the plant disease.
- Advisory Module
Following the disease prediction stage, the system activates the advisory module. This module generates recommendations regarding disease treatment, preventive measures, and possible causes of infection. The advisory information is generated using an AI-based system integrated with external APIs.
- Display Results
The final results are presented to the user through the web interface. The output includes the predicted disease name, confidence score, and suggested treatment recommendations. Additional outputs such as text-to-speech audio and downloadable reports may also be provided.
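The downloadable report can be sketched as a content-assembly step. The PDF rendering itself would use a library such as fpdf2 (an assumption, shown only as a comment); the runnable part below assembles the report text from the prediction results.

```python
def build_report(disease: str, confidence: float, advice: str) -> str:
    """Assemble the diagnosis report text shown on screen and exported to PDF."""
    lines = [
        "Plant Disease Recognition Report",
        f"Predicted disease: {disease}",
        f"Confidence: {confidence:.2%}",
        "Recommended solution:",
        advice,
    ]
    return "\n".join(lines)

report = build_report(
    "Apple Scab", 0.9969,
    "Apply a recommended fungicide and remove infected leaves.",
)

# PDF rendering (fpdf2 assumed):
# from fpdf import FPDF
# pdf = FPDF(); pdf.add_page(); pdf.set_font("Helvetica", size=12)
# pdf.multi_cell(0, 8, report); pdf.output("diagnosis_report.pdf")
```

The same report string can also feed the text-to-speech output, keeping the on-screen, audio, and PDF channels consistent.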
- End
The process ends after the system successfully displays the diagnosis and recommendations to the user. The user can then repeat the process by uploading another leaf image if needed.
I. Home Interface
The home interface serves as the main entry point of the Smart Plant Assistant system. As shown in Fig. 4, the interface displays the title Plant Disease Recognition System and provides a simple interface where users can upload or capture a plant leaf image for disease detection. The sidebar navigation menu allows users to access different sections of the system, including Home, About, Disease Recognition, and Smart Agri Chatbot. The sidebar also includes an option for entering the user’s city to retrieve real-time weather information using an external weather API. This information helps provide environmental context that may influence plant health conditions. The interface is designed using Streamlit to ensure ease of use and accessibility for farmers and agricultural practitioners.
Fig. 4. Home Interface
J. About Page
The About page provides information about the purpose and functionality of the proposed system. As illustrated in Fig. 5, this section explains that the system uses a Convolutional Neural Network (CNN) trained on the PlantVillage dataset to detect plant diseases from leaf images. Additionally, the page highlights the integration of the OpenAI API, which is used to generate intelligent treatment recommendations and provide a farming chatbot for user assistance. This section helps users understand how the system works and the technologies involved in plant disease detection.
Fig. 5. About Page
K. Disease Recognition Interface
The disease recognition interface allows users to upload plant leaf images and obtain predictions from the trained CNN model. As shown in Fig. 6, users can upload an image and click the Predict Disease button to initiate the disease detection process. Once the image is processed, the system displays the predicted disease name along with a confidence score that indicates the reliability of the classification. In the example shown, the system predicts Apple Scab disease with a confidence score of 99.69%. The interface also provides an option to view the recommended solution for the detected disease. These recommendations are generated by the AI advisory module and may include treatment suggestions and preventive measures. This feature helps farmers quickly identify plant diseases and take appropriate action to protect crop health.
Fig. 6. Disease Detection and Prediction Interface
- RESULTS AND DISCUSSION
The proposed AI-based plant disease recognition system was evaluated to analyze its effectiveness in identifying plant diseases from leaf images and providing intelligent treatment recommendations. The system integrates a Convolutional Neural Network (CNN) model with a web-based interface to enable real-time disease detection and user interaction.
The CNN model was trained using a labeled dataset containing multiple plant leaf images representing different disease categories as well as healthy leaves. During training, the model learned to recognize visual patterns such as leaf texture, color variations, and disease spots.
Similar studies have reported high accuracy using deep learning models, demonstrating the effectiveness of AI-based approaches in plant disease detection [3].
After training, the model was tested on unseen leaf images to evaluate its prediction performance. The experimental results show that the CNN classifies plant diseases with a high level of accuracy. The model's ability to predict the disease class together with an associated confidence score (indicating how trustworthy the prediction is) builds confidence in its output. For example, when a leaf image affected by Apple Scab was uploaded, the system correctly identified the disease with a confidence score of 99.69%. This demonstrates the effectiveness of the trained deep learning model in detecting plant diseases from leaf images.
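A confidence score such as the 99.69% reported here is typically the largest value of the network's softmax output; a minimal sketch of that computation:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw CNN outputs into a probability distribution over classes."""
    shifted = logits - np.max(logits)  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# The reported confidence is then the maximum probability:
#   probs = softmax(raw_outputs)
#   confidence = probs.max()   # e.g. 0.9969 -> displayed as 99.69%
```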
In addition to disease classification, the system provides intelligent treatment recommendations using an AI advisory module integrated with the OpenAI API. The system generates explanations about disease symptoms, possible causes, and recommended treatments, helping farmers take appropriate actions to manage plant health.
The web-based interface developed using Streamlit allows users to easily upload leaf images, view prediction results, and access treatment suggestions. Additional features such as real-time weather information, downloadable PDF reports, and text-to-speech output improve the usability and accessibility of the system.
Overall, the results show that the proposed system is capable of accurately detecting plant diseases and providing useful agricultural guidance. The integration of deep learning and artificial intelligence enables the system to support farmers in early disease diagnosis and effective crop management.
- CONCLUSION AND FUTURE WORK
This research presents an AI-based plant disease recognition and recommendation system designed to assist farmers in detecting plant diseases and obtaining appropriate treatment suggestions. The system utilizes a Convolutional Neural Network (CNN) model to analyze plant leaf images and accurately classify diseases based on visual patterns.
Addressing the multifaceted challenges of AI requires a collaborative effort among researchers, developers, policymakers, and society to create robust, fair, and transparent systems, thereby unlocking the full potential of artificial intelligence.[1]
The proposed system integrates deep learning with a web-based platform developed using the Streamlit framework. Users can upload leaf images, obtain disease predictions along with confidence scores, and receive AI-generated treatment recommendations. Additional features such as a smart agriculture chatbot, real-time weather information, text-to-speech output, and downloadable PDF reports further enhance the functionality of the system. The experimental results demonstrate that the system can accurately identify plant diseases and provide valuable recommendations for maintaining plant health. By detecting diseases at an early stage and offering intelligent guidance, the system can help farmers reduce losses from diseased crops.

Future work will focus on expanding the dataset to cover a larger number of plant species and disease types. Additionally, the integration of mobile applications and Internet of Things (IoT) devices could enable real-time field monitoring of crops. The use of advanced deep learning architectures can further enhance agricultural diagnostics and redefine how plant disease detection systems are designed [3].
- REFERENCES
- M. Rana, S. Sall, V. S. Bijoor, V. Gaiwad, U. V. Gaikwad, P. Patil, and K. Meher, Obstacles to the Full Realization and Adoption of Artificial Intelligence (AI), SEEJPH, vol. XXV, 2024.
- A. K. Singh, A. Rao, P. Chattopadhyay, R. Maurya, and L. Singh, Effective plant disease diagnosis using Vision Transformer trained with leafy-generative adversarial network-generated images, Expert Systems with Applications, vol. 240, 2024, Art. no. 124387, doi: 10.1016/j.eswa.2024.124387.
- U. Barman, P. Sarma, M. Rahman, V. Deka, S. Lahkar, V. Sharma, and M. J. Saikia, ViT-SmartAgri: Vision Transformer and Smartphone-Based Plant Disease Detection for Smart Agriculture, Agronomy, vol. 14, no. 2, Art. no. 327, 2024, doi: 10.3390/agronomy14020327.
- W. Liu, Z. Xie, J. Du, Y. Li, Y. Long, Y. Lan, T. Liu, S. Sun, and J. Zhao, Early detection of pine wilt disease based on UAV reconstructed hyperspectral image, Frontiers in Plant Science, vol. 15, Art. no. 1453761, 2024, doi: 10.3389/fpls.2024.1453761.
- X. Zhang, B. A. Vinatzer, and S. Li, Hyperspectral imaging analysis for early detection of tomato bacterial leaf spot disease, Scientific Reports, vol. 14, Art. no. 27666, 2024, doi: 10.1038/s41598-024-78650-6.
- A. H. Ali, A. Youssef, M. Abdelal, and M. A. Raja, An ensemble of deep learning architectures for accurate plant disease classification, Ecological Informatics, vol. 81, Art. no. 102618, 2024, doi: 10.1016/j.ecoinf.2024.102618.
