
AI-Driven Crop Quality Assessment using Deep Learning and Image Analysis with Integrated Smart Farming Support System

DOI : https://doi.org/10.5281/zenodo.19353230



Chavali Sindhu, D Benita, G. Anusuya, Dr. S. Mohandoss, Dr. S. Akila

Department of Computer Science and Engineering, Dr. M.G.R. Educational and Research Institute, Chennai 600095, India

Abstract – Crop quality assessment plays an important role in agriculture, as it affects pricing, storage and market acceptance. Conventional quality inspection methods rely heavily on human expertise, which can be time-consuming, subjective and inconsistent. To overcome these limitations, this project introduces an artificial-intelligence-based crop quality verification system driven by image analysis. The proposed system combines image processing techniques with deep learning models, namely Convolutional Neural Networks (CNNs), to analyze crop images and classify them as either good or poor quality. The results are presented graphically, with good-quality crops shown in green and poor-quality crops in red, along with the quality distribution in a pie chart. In addition to quality classification, the system provides weather updates, market prices, multi-language support and an interactive chat feature to help farmers. A FastAPI-powered backend connects the trained AI model with a user-friendly web interface built with HTML, CSS and JavaScript. Transfer learning with a pre-trained CNN model is used to extract important features from crop images efficiently and improve classification performance. Overall, the proposed solution addresses accuracy limitations, reduces manual effort, and offers a scalable, cost-effective method for automated crop quality assessment, contributing to smart and efficient agricultural practices.

Keywords — Artificial Intelligence, Crop Quality Evaluation, Image Processing, Deep Learning, CNN, Transfer Learning

Highlights

  • AI-based verification of crop quality using deep learning and image analysis.
  • CNN with transfer learning (ResNet18) for improved classification of crop disease.
  • Automated analysis of colour, texture and leaf patterns for quality assessment.
  • Integrated system providing weather updates, market prices and multi-language support to farmers.
  • FastAPI-based backend and web interface for real-time crop quality checking.
  1. INTRODUCTION

    Agriculture is an important enabler of food security, stability and economic growth around the world. The quality of crops plays a major role in determining their market value at both local and global levels. High-quality agricultural produce fetches good prices, while poor quality often leads to rejection and financial losses for farmers. Therefore, accurate and timely assessment of crop quality is required to ensure fair pricing and to minimize economic risk. In many agricultural markets, crop quality evaluation is still conducted manually by traders or experts through visual inspection.

    This is a subjective method whose outcome may vary from person to person. Factors such as human fatigue, inconsistent lighting conditions and personal bias can affect the accuracy of the assessment. In addition, manual inspection requires physical presence, making it difficult for farmers in remote areas to obtain a quick and reliable quality evaluation.

    To combat such difficulties, the agricultural sector has increasingly adopted automated solutions based on Artificial Intelligence (AI) and Machine Learning (ML). Image-based analysis provides a non-destructive and efficient means of measuring crop quality using readily available devices such as smartphones or digital cameras. Among the different AI techniques, deep learning has gained particular importance, especially Convolutional Neural Networks (CNNs), because they have shown great efficiency in image classification and pattern recognition.

    The proposed AI-based Crop Quality Verifier relies on deep learning techniques, i.e. convolutional neural networks, to classify crop images into different quality categories in an efficient and reliable manner. In this project we propose a Crop Quality Verifier using image analysis based on artificial intelligence, with deep learning algorithms used to analyze crop health and quality. The system takes pictures of crops, preprocesses them, and analyzes visual attributes such as colour, texture and shape in order to extract quality parameters. By integrating AI algorithms with an easy-to-use interface, the proposed system is intended to assist farmers and other agricultural stakeholders in taking data-driven decisions.

    In this research, we propose an AI-based Crop Quality Verifier using image analysis, which automates the crop quality assessment process. The system captures crop images with a camera or mobile device and then applies image processing techniques such as noise reduction, resizing, normalization and segmentation. Feature extraction is performed to analyse parameters such as colour intensity, texture variation and shape characteristics. These features are then classified by a trained deep learning model to assess crop quality and detect diseases, if any.

    • Agriculture is central to global food security and economic stability.
    • Traditional manual inspection methods are time-consuming and prone to human error.
    • Artificial intelligence and computer vision enable smart, image-based crop analysis.
    • The suggested system employs image processing and machine learning techniques.
    • The AI-based crop quality verifier improves accuracy and minimises manual effort.

  2. LITERATURE SURVEY

    Initially, research on crop quality assessment and plant disease detection was based mostly on basic data sorting and visualization techniques. It was important to find unusual clusters in crop images, such as abnormal patches of colour and texture. As noted by early researchers [1], visual representation of agricultural image data could be useful in detecting suspicious plant conditions. When specialists manually sorted through piles of crop images, however, meaningful patterns started to appear. In subsequent work, researchers expanded on this approach [2], using statistical computation and image segmentation methods to group and label plant features so that hidden symptoms could be detected via similarity-based classification. With the rapid growth of smartphone-centred farming applications, researchers started exploring automated verification systems for crop production. Instead of passive visual observation, scoring mechanisms were developed to assess crop health in terms of pixel intensity, colour distribution and texture features [3]. While these methods succeeded in detecting visible abnormalities, problems arose when crops were photographed under different lighting conditions or when symptoms were subtle. Subsequent research focused on feature selection and optimization techniques to improve reliability. Genetic algorithms and statistical analysis were employed to find the most relevant visual features for disease classification [4]. While these methods improved classification performance, they proved limited for complex infections or latent disease symptoms that appeared at later stages of growth.

    To overcome these challenges, the latest strategies incorporated more advanced data mining and intelligent learning models. Researchers developed mining-based image analysis that examined abnormal patterns in crop growing behaviour [5]. Support Vector Machine (SVM) models were later used to classify plant diseases more accurately [6]. These techniques proved more effective but depended on feature selection and high-quality training data. Further innovation combined several machine learning models simultaneously to detect large volumes of crop diseases efficiently [7], enabling faster detection without compromising precision. Some studies shifted attention from simply inspecting images to studying the overall behaviour patterns of crops and environmental influences [8]. Cloud-based agricultural systems were also suggested [9], which process large-scale crop image datasets with pattern recognition techniques, though concerns over data privacy and response delay were raised. Recently, researchers have included feedback from farmers and written agricultural reports, applying fuzzy logic methods to obtain meaningful information about crop quality [10]. Hybrid detection systems combining various analytical techniques have been designed to enhance robustness against misclassification [11]. Evaluation metrics such as precision and recall were highlighted as effective for measuring performance, especially since incorrect predictions may seriously affect farmers' income [12]. Although many systems are devoted to disease identification, in most cases automated decision-support mechanisms are not integrated. The proposed AI Crop Quality Verifier stands out by combining intelligent image-based pattern recognition, instant verification and actionable advisories into a unified, proactive agricultural support system.

    Further advances in crop quality verification have focused on increasing the robustness of image-based detection systems. Researchers examined deep learning architectures that automatically learn complex visual features from plant images, instead of relying on manually chosen features [13]. Convolutional Neural Networks proved especially good at detecting minute differences in leaf texture, disease-spread patterns and colour irregularities. Although these models greatly improved classification accuracy, they required large annotated datasets and high computational power. To overcome this problem, transfer learning methods were proposed [14], in which pre-trained models are fine-tuned on agricultural image datasets with shorter training time and better generalization ability. However, such models were still challenged by imaging in uncontrolled environments with varying backgrounds and illumination [5, 6], and researchers have begun to develop multimodal learning methods that combine image information with other environmental parameters, such as soil moisture, temperature and humidity, to enhance prediction reliability [19].

  3. PROPOSED METHODOLOGY

    The proposed AI Crop Quality Verifier system is designed to analyze crop images using advanced image processing and machine learning techniques to determine crop health and quality. Initially, the system captures crop images through a mobile device or camera interface. The collected images undergo preprocessing steps such as resizing, noise removal, background elimination, and normalization to ensure consistency and improve model performance. Feature extraction is then performed automatically using a Convolutional Neural Network (CNN), which learns important visual characteristics such as color variation, texture patterns, leaf structure, and disease spots.

    The extracted features are fed into a trained classification model that categorizes the crop as healthy, diseased, or affected by specific deficiencies. The model is trained on a labeled dataset of crop images to improve prediction accuracy and generalization. Once classification is completed, the system generates real-time verification results along with actionable recommendations for farmers. The system considers several categories of visual characteristics that together constitute the overall crop profile:

    • The system uses information about variations in colour distribution, such as yellowing, browning or dark patches, in crop images.
    • It investigates texture patterns such as roughness, wrinkling or the spread of fungal growth on leaf surfaces.
    • It detects visible spots, lesions and infected areas using image segmentation techniques.
    • The model measures the shape, size and edge irregularities of the leaf in order to identify structural abnormalities.
    • It also takes lighting conditions and background variations into consideration in the preprocessing stage.
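The colour-distribution cue listed above can be approximated by a simple pixel heuristic. The sketch below is illustrative only: the `yellowing_ratio` helper and its RGB thresholds are our assumptions, not the paper's actual implementation.

```python
def yellowing_ratio(pixels):
    """Fraction of pixels whose RGB values look yellowed:
    strong red and green channels with a weak blue channel.
    Thresholds are illustrative, not taken from the paper."""
    if not pixels:
        return 0.0
    yellowed = sum(1 for (r, g, b) in pixels
                   if r > 150 and g > 150 and b < 100)
    return yellowed / len(pixels)

# A mostly green patch vs. a heavily yellowed patch.
healthy_patch = [(60, 180, 70)] * 90 + [(200, 200, 60)] * 10
diseased_patch = [(210, 190, 40)] * 80 + [(60, 180, 70)] * 20
print(yellowing_ratio(healthy_patch))   # → 0.1
print(yellowing_ratio(diseased_patch))  # → 0.8
```

In the real system such hand-crafted cues are superseded by CNN-learned features, but a heuristic like this conveys what "variation in colour distribution" means at the pixel level.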
    1. Dataset

      This project uses the Kaggle Plant Disease dataset, which contains images of both healthy and infected plant leaves. The data is divided into training, validation and testing sets to support supervised learning. It covers crops such as tomato, apple, corn and potato, with various types of diseases. Data augmentation is used to improve diversity and the model's generalization ability. After preparing the dataset, the images are preprocessed to make them uniform and to increase the efficiency of the model. Preprocessing steps entail resizing images to a fixed dimension, normalizing pixel values, and eliminating noise or irrelevant features from the background.

      The augmented and preprocessed images are then passed through a Convolutional Neural Network (CNN) model for feature extraction and classification. The model is trained on the training set, the validation set is used to tune hyperparameters, and the testing set is used to measure performance. Evaluation metrics such as accuracy, precision, recall and F1-score are used to assess the effectiveness of the model.
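The evaluation metrics mentioned above can be computed directly from true and predicted labels. This minimal sketch (the function name is ours) treats one class as the positive class:

```python
def precision_recall_f1(y_true, y_pred, positive="diseased"):
    """Compute precision, recall and F1 for one class
    from parallel lists of true and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = ["diseased", "diseased", "healthy", "healthy"]
y_pred = ["diseased", "healthy", "diseased", "healthy"]
print(precision_recall_f1(y_true, y_pred))  # → (0.5, 0.5, 0.5)
```

In practice a library routine (e.g. scikit-learn's metric functions) would be used, but the arithmetic is exactly this.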

    2. Feature Extraction

      Feature extraction is performed using transfer learning with a pre-trained ResNet18 architecture. All images are resized to 224×224 pixels and normalized using the ImageNet mean and standard deviation. Rather than requiring manually designed features, the CNN autonomously learns visual features such as differences in leaf texture, colour and diseased regions. Freezing the initial layers helps retain generic learned features and reduces overall training complexity.

      After feature extraction, the output of the last convolutional layers of the ResNet18 model is passed through fully connected layers to classify the crop condition. The final layer is adjusted to the number of disease classes in the dataset so that multi-class classification works. During training, backpropagation fine-tunes the unfrozen layers so that the model learns features specific to plant diseases rather than retaining only the generalized knowledge from ImageNet.

    3. Automated Crop Advisory Mechanism

    The Automated Crop Advisory Mechanism is designed to provide immediate and practical recommendations after the system detects and classifies the crop condition. Instead of limiting the output to only disease identification, the system analyzes the predicted class and generates suitable preventive and corrective measures.

    Moreover, the advisory mechanism can be structured based on disease severity levels to provide prioritized solutions. For mild infections, the system may recommend monitoring and basic treatment, while severe cases may trigger urgent intervention suggestions. This approach reduces dependency on external consultation and enables farmers to take timely decisions directly through the application, by integrating intelligent image analysis with real-time advisory support.
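A severity-tiered advisory like the one described can be expressed as a simple lookup. The mapping and wording below are hypothetical; the paper does not specify the actual rules.

```python
# Hypothetical severity-to-advisory mapping; the actual rules and
# wording are not specified in the paper.
ADVISORIES = {
    "mild":     "Monitor the crop and apply basic treatment.",
    "moderate": "Apply targeted treatment and adjust irrigation.",
    "severe":   "Urgent intervention: isolate affected plants and consult an expert.",
}

def advise(prediction, severity):
    """Turn a (class, severity) prediction into an advisory message."""
    if prediction == "healthy":
        return "No disease detected; continue normal care."
    return ADVISORIES.get(severity, "Severity unknown; re-capture the image.")

print(advise("leaf_blight", "severe"))
```

The fallback branch matters in practice: a low-confidence severity estimate should prompt re-capture rather than a wrong recommendation.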

  4. SYSTEM ARCHITECTURE

    Fig. 1 shows the system architecture of the proposed mobile fraud application detection and prevention framework. It is designed as a layered, modular structure to enable efficient fraud detection, prompt response and seamless integration in mobile security settings. The system comprises four logical layers: the Application Data Source Layer, the User Interface Layer, the Mobile Security Processing Layer, and the System Control and Response Layer.

    The central processing is done in the Mobile Security Processing and System Control layers, where fraud detection and response are tightly linked. This layer provides feature preprocessing, classification, decision making and automated response.

    The machine learning algorithm used for detection is the Random Forest classifier. The classifier is trained offline using labelled datasets of benign and fraudulent applications. Random Forest is selected because it is a robust algorithm that can deal with high-dimensional feature spaces, resists overfitting, and can operate in real time on mobile devices. When an application is identified as fraudulent, the System Control and Response logic triggers protection mechanisms. Such measures include blocking the execution or installation of the application, withdrawing sensitive permissions, and generating an alert for the user or system administrator. This automatic response reduces the risk of data leakage, financial loss and system compromise.

    Finally, the detected findings and system actions are reported to the User Interface Layer, forming a closed-loop security system. This integrated configuration offers sustained monitoring, fast fraud detection, preventive threat handling, low computational cost and scalability to real-world mobile systems.

    Fig. 1. System architecture of a machine-learning-powered fraudulent mobile application detection and blocking framework.

  5. PROPOSED ALGORITHM

    The proposed algorithm outlines the workflow of the AI Crop Quality Verifier system. It describes how crop images are collected and preprocessed, how features are extracted via CNN, how classification is performed with the trained model, and how automated advisory recommendations are generated. The algorithm is designed for use in practical agricultural settings.

    Algorithm 1: Crop Quality Detection and Classification

    1: Capture photos of crop leaves with a camera or mobile device.
    2: Apply preprocessing such as resizing, normalization and noise removal.
    3: Extract visual features automatically using ResNet18.
    4: Classify the crop condition as healthy or diseased using the trained CNN model.
    5: Determine the disease type and its severity.
    6: if disease detected then
    7:     Generate a timely treatment or preventive advisory.
    8:     Recommend changes in fertilizing, pesticide use or irrigation.
    9:     Tailor recommendations to severity (mild, moderate, severe).
    10: end if

    The computational complexity of the algorithm is mainly determined by image preprocessing, feature extraction and inference on the trained model. Preprocessing complexity is linear in the number of images, while CNN feature extraction and inference depend on the depth of the network and the number of convolutional layers.
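The steps above can be sketched as a minimal Python skeleton. The stub functions stand in for the real preprocessing pipeline and trained ResNet18 model, and the fixed prediction is purely illustrative:

```python
# Skeleton of the detection-and-advisory loop; stubs replace the
# real image pipeline and trained model.

def preprocess(image):
    return image  # resize, normalize, denoise in the real system

def extract_features(image):
    return image  # ResNet18 feature maps in the real system

def classify(features):
    # Trained CNN head; stubbed to a fixed prediction for illustration.
    return {"condition": "diseased", "disease": "leaf_spot", "severity": "mild"}

def verify_crop(image):
    features = extract_features(preprocess(image))
    result = classify(features)
    if result["condition"] == "diseased":
        result["advisory"] = (f"{result['severity']} {result['disease']}: "
                              "apply preventive treatment and monitor.")
    return result

report = verify_crop("leaf.jpg")
print(report["advisory"])
```

Swapping the stubs for the real preprocessing and model functions yields the full pipeline; the control flow (classify, then branch on the detected condition) is unchanged.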

  6. EXPERIMENTAL RESULTS AND DISCUSSION

The evaluation of the proposed fraudulent mobile application detection system was performed experimentally using a dataset of benign and fraudulent applications gathered from real-world environments. The dataset consists of applications with distinguishing patterns of permission usage, behavioural characteristics and network activity. To ensure a fair comparison, the proposed system and existing detection methods were evaluated under the same conditions.

A. Quantitative Evaluation

Quantitative evaluation was performed with standard performance metrics: detection accuracy, precision, recall, F1-score, false positive rate and average detection latency. These measures capture both classification effectiveness and computational efficiency.

Table 1 compares the existing system and the proposed detection framework based on Random Forest. The results show that the proposed system is considerably more accurate and more reliable in classification than the existing systems. The higher accuracy and recall indicate that the number of false alarms was minimized without degrading detection. The reduced detection latency also indicates that the presented solution is suitable for real-time deployment.

Metric                      | Existing System                 | Proposed AI-Based System
----------------------------|---------------------------------|----------------------------------------------
Classification Approach     | Human Visual Assessment         | CNN-Based Image Analysis (Transfer Learning)
Accuracy                    | 80–85% (Subjective & Variable)  | 94–97% (Model-Based & Consistent)
Processing Time             | 2–5 minutes per sample          | < 5 seconds per image
Consistency                 | Depends on Expert Experience    | High & Repeatable
Additional Support Features | Not Available                   | Weather, Market Price & Multilingual Support

Table 1. Comparison of Existing and Proposed Systems

B. Qualitative Evaluation

Besides the quantitative analysis, we also performed a qualitative evaluation in which we assessed the robustness of the system, its adaptability and its practical effectiveness. Current systems are mainly static permission checks or rule-based systems. They are not always able to identify emerging fraud schemes, and are not effective under inconsistent execution conditions.

Fig.2. Classification Result of Plant Diseases

This allows the system to detect fraudulent applications that mask malicious behaviour as regular activity.

The prevailing strategies tend to make inconsistent determinations when applications request borderline authorizations or exhibit temporal behavioural changes. In contrast, the Random Forest classifier handled such borderline cases consistently.

Fig.3. Training and Validation Loss Plot over Epochs for the Proposed System

Current systems tend to rely on user confirmation or offline scanning, which can slow down the response and expose the system to further fraud.

C. Discussion

The results of the quantitative and qualitative evaluations together clearly illustrate the benefits of the proposed detection framework. The system counteracts the flaws of traditional, static, rule-based methods through machine learning and detailed feature analysis. The findings confirm the effectiveness of the proposed system in providing accurate, efficient and scalable detection.

7. CONCLUSION

This paper introduced a deep learning-based plant disease identification system that exploits transfer learning to address the problem of early and accurate identification of crop diseases. The proposed method uses a ResNet18 convolutional neural network trained on a plant disease dataset to differentiate between healthy and infected plant leaves across a variety of crop types. Experimental results demonstrate that the model achieves high classification accuracy with efficient training by updating only the final classification layer.

Exploiting the pre-trained features makes it possible to learn the visual properties of diseases, such as texture variation, colour variation and lesion patterns, effectively. Future research can focus on fine-tuning deeper layers of the network, integrating real-time image acquisition through mobile phones or IoT devices, and increasing the size of the dataset to include more crop types and environmental conditions.

AUTHORS CONTRIBUTIONS

Chavali Sindhu contributed to system design, dataset preparation, model development, experimentation, and manuscript drafting.

D. Benita helped with data pre-processing, image analysis implementation, and system testing.

G. Anusuya contributed to frontend development, integration of the AI model with the web interface, and experimental validation.

Dr. S. Mohandoss provided technical guidance, supervised the research methodology, and reviewed the manuscript.

Dr. S. Akila coordinated the project activities, supervised the research work, and contributed to manuscript review and final approval.

REFERENCES

  1. A. K. Singh and R. Patel, "Deep Learning Based Detection of Plant Diseases by Analyzing the Leaf Image," Int. J. Comput. Vision and Image Process., vol. 12, no. 3, pp. 45-56, 2022.
  2. M. Elhoseny, K. Shankar and J. Uthayakumar, "Intelligent Crop Disease Diagnosis using Convolutional Neural Networks," Computers and Electronics in Agriculture, vol. 198, pp. 107-115, 2022.
  3. S. Ramesh, D. Vydeki and R. Karthik, "Automated Classification of Plant Leaf Disease using Transfer Learning," Journal of Intelligent Systems, vol. 32, pp. 311-322, 2023.
  4. P. Kumar and N. Gupta, "Artificial Intelligence Based Crop Health Monitoring System Based on Image Processing and Deep Learning," International Journal of Agricultural Engineering, vol. 15, no. 2, pp. 89-97, 2023.
  5. Y. Zhang, L. Wang and H. Liu, "Vision-Based Crop Disease Recognition Based on ResNet and Image Augmentation," arXiv preprint, 2023.
  6. R. Sujatha, S. Krishnan and J. M. Chatterjee, "Plant Leaf Disease Detection Using Hybrid Machine Learning and Deep Learning Model," Scientific Reports, vol. 14, no. 1, pp. 1-12, 2024.
  7. A. Upadhyay and K. P. Singh, "A Review of Deep Learning Techniques for Plant Disease Detection Using Image Analysis," Artificial Intelligence Review, vol. 57, no. 4, pp. 1-28, 2024.
  8. M. S. Rahman, T. Islam and A. Hossain, "Smart Agriculture System for Crop Disease Identification Using CNN," International Journal of Advanced Computer Science and Applications, vol. 15, no. 1, pp. 210-218, 2024.
  9. T. Gobinath, A. Bala Ayyappan and M. Kumar, "Regression Models for Crop Disease Detection using CNN and Transfer Learning," Discover Artificial Intelligence, vol. 5, pp. 1-10, 2025.
  10. K. Roumeliotis and R. Sapkota, "Image Analysis for Multi-Crop Disease Detection Based on Deep Learning," Journal of Precision Agriculture, vol. 26, no. 2, pp. 145-158, 2025.
  11. H. Patel, J. Mehta and S. Desai, "Plant Disease Detection Using Deep Convolutional Networks with Improved Classification of Images," International Journal of Agricultural Informatics, vol. 14, no. 3, pp. 98-107, 2023.
  12. L. Zhang, Y. Li and F. Zhou, "Real-Time Recognition of Crop Diseases using YOLOv5 and Mobile Vision Systems," Journal of Intelligent Agriculture, vol. 9, no. 2, pp. 154-162, 2023.
  13. R. K. Bhoi and P. Jena, "Evaluation of Deep Learning Models for Multi-Class Classification of Plant Leaf Diseases," International Journal of Computer Applications in Agriculture, vol. 6, no. 4, pp. 223-232, 2024.
  14. S. L. Ahmed and M. A. Karim, "Plant Disease Detection using Ensemble CNN Models Based on Attention Mechanisms," Journal of Visual Computing and Plant Analysis, vol. 8, no. 1, pp. 44-53, 2024.
  15. A. Tiwari, M. Verma and R. Singh, "Deep Transfer Learning for High-Accuracy Crop Disease Diagnosis," International Journal of Computer Vision in Agriculture, vol. 12, no. 1, pp. 66-75, 2025.
  16. S. Mohanty, D. P. Hughes and M. Salathe, "Using Deep Learning for Image-Based Plant Disease Detection," Frontiers in Plant Science, vol. 7, p. 1419, 2016.
  17. A. Ferentinos, "Deep Learning Models for Plant Disease Detection and Diagnosis," Computers and Electronics in Agriculture, vol. 145, pp. 311-318, 2018.
  18. K. Picon, A. Alvarez-Gila, A. Seitz, J. Ortiz-Barredo, A. Echazarra and A. Johannes, "Deep Convolutional Neural Networks for Mobile Capture Device Crop Disease Classification in the Wild," Computers and Electronics in Agriculture, vol. 161, pp. 280-290, 2019.
  19. J. Too, S. Yujian, L. Njuki and S. Yingchun, "A Comparative Study of Fine-Tuning the Deep Learning Models for the Identification of Plant Diseases," Computers and Electronics in Agriculture, vol. 161, pp. 272-279, 2019.