DOI : 10.17577/IJERTCONV14IS010047- Open Access

- Authors : Sanjana S.K, Ms. Priyadarshini P, Mr. Hareesh B
- Paper ID : IJERTCONV14IS010047
- Volume & Issue : Volume 14, Issue 01, Techprints 9.0
- Published (First Online) : 01-03-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
An AI-Driven Approach for Automated Car Damage Detection
Sanjana S.K, Department of Computer Applications, St Joseph Engineering College, Vamanjoor, Mangalore, Karnataka 575028
Ms. Priyadarshini P, Assistant Professor, Department of Computer Applications, St Joseph Engineering College, Vamanjoor, Mangalore, Karnataka 575028
Mr. Hareesh B, Associate Professor, Department of Computer Applications, St Joseph Engineering College, Vamanjoor, Mangalore, Karnataka 575028
Abstract – Traditional methods of evaluating vehicle damage for insurance claims are often manual, slow, and susceptible to errors. This study introduces a comprehensive deep learning-based system designed to automate the processes of identifying vehicle damage, categorizing severity, and estimating repair costs. Utilizing the YOLO object detection framework and a labeled dataset covering various car makes and components, the system ensures precise detection. A visual explanation module is integrated to highlight the affected parts, fostering user confidence in the system's outputs. Experimental results demonstrate high accuracy in identifying damaged areas and generating realistic cost estimations with corresponding severity assessments. This AI-driven solution can enhance the efficiency of claim processing, reduce human bias, and provide meaningful insights to insurance providers, service centers, and vehicle owners.
Index Terms – Vehicle damage detection, YOLO, repair cost prediction, damage severity classification, insurance automation, deep learning.
-
INTRODUCTION
Accurate assessment of automobile damage and repair costs is essential in the automotive sector, especially within insurance and maintenance services. Manual inspection methods are often inefficient, subjective, and inconsistent, leading to processing delays and trust issues. Recent progress in deep learning, particularly with real-time frameworks such as YOLO (You Only Look Once), has enabled faster and more reliable detection of vehicle damage through image analysis.
This paper introduces an AI-driven approach that integrates YOLO with a custom-labeled dataset to automate damage detection, categorize severity, and estimate costs. The result is a scalable and user-centric system.
A. Relevance of the Project
In India, road accident rates have been increasing steadily. For instance, in 2020, around 366,000 vehicles reported damage, which rose to over 403,000 in 2021, a number expected to rise further. Under such conditions, manual assessments are proving to be insufficient. There is a clear need for smart, automated systems that deliver faster and more accurate evaluations.
The proposed framework bridges this gap by offering a real-time, economical solution powered by deep learning. It minimizes the need for physical inspections, speeds up insurance workflows, and curtails fraudulent claims through visual validations and structured data. Unlike previous systems, it features an intuitive user interface for widespread usability – beneficial to insurers, garages, and car owners alike.
B. Objectives
This research is centered on building a deep learning model that automates car damage detection and repair cost evaluation using image analysis. The key objectives are:
Automated Damage Detection: To develop a YOLO- based model for accurately identifying and localizing damaged vehicle parts from images with minimal computational cost.
Severity Classification: To categorize damages into minor, moderate, or severe based on visual features to assist in cost estimation and prioritization.
Cost Estimation Module: To map detected damages to a structured dataset for brand-, model-, and part-specific cost prediction, enabling part-wise repair estimates.
User Interaction and Customization: To design an interactive interface for uploading images, selecting vehicle details, and visualizing results in real-time.
Performance Evaluation: To assess the model's accuracy, speed, and consistency against traditional methods using a curated dataset.
Transparency and Explainability: To provide visual cues that explain the model's decisions, enhancing user trust and system interpretability.
Overall, this research aims to deliver a reliable, automated, and cost-effective solution to streamline damage assessment and enhance user satisfaction in insurance and repair services.
-
LITERATURE REVIEW
-
Smarter Vehicle Damage Detection Using an Enhanced YOLO Model [1]
The researchers in this study advanced a smarter solution for detecting vehicle damage by extending the YOLOv9 model. They adopted the IoU-based SCYLLA loss function and added a Convolutional Block Attention Module (CBAM) so the network attends more closely to damaged regions, improving its focus on damaged spots. They also introduced a new Damage Severity Index (DSI) to increase the accuracy of damage assessment.
The DSI combines the spatial extent of the detected damage (how many grid cells are affected), the number of distinct damage instances predicted, and the model's confidence in those detections into a single index that quantifies severity. The improvement in severity measurement is modest, but the extensions clarify the assessment process even without a direct, fine-grained severity ground truth.
-
Slick: A simpler way to detect car damage [2]
Researchers presented Slick as a quick and easy tool for locating car damage in images. It relies on a general prior about where damage is likely to occur on a car, which makes it prone to mistaking artifacts such as reflections or surface spots for damage. Slick also does not go very deep: it provides no repair cost estimates, and because it uses neither object detection nor segmentation, it struggles to delineate the damaged area precisely and cannot report the severity of damage to individual car parts.
-
Deep learning-based cost evaluation for vehicle repairs [3]
In this study, an ML system trained on labeled images and sensor data (vehicle age, mileage, etc.) predicts when a given component is likely to fail and issues early warnings about the upcoming total repair cost. Because the method uses no object detection or segmentation steps, it cannot visually locate the affected parts, even though its ultimate purpose is cost estimation. It also does not specify the extent of damage or which parts of the car need inspection. With these features missing, the system is less helpful for generating exhaustive reports for insurance needs or analyzing repairs at a granular level.
-
MobileNetV2-Based Approach for Detecting Car Damage [4]
In this study, the MobileNetV2 model was used to detect cracks, bumps, and scratches in images. The model quickly identified various types of damage but struggled to determine the exact location of the damage, its severity, or potential repair costs. The researchers also used a CNN to assess costs from images, factoring in variables such as vehicle age and mileage.
-
Comparison of CNN models for classifying damage [5]
This article compared several deep learning models, including MobileNet, EfficientNet, and Mask R-CNN, for classifying vehicle damage. Although the work assessed model quality, speed, and accuracy, attention was limited to identifying the type of damage, without considering its location or severity.
-
-
METHODOLOGY
The proposed system is a modular, AI-powered framework designed to automate car damage detection, classify severity, and estimate repair costs based on vehicle brand and model. It consists of four core components: damage localization using YOLO, severity classification, brand- and model-specific cost estimation, and a user-interactive interface. Each component of the system is discussed in the subsections that follow. Prior systems, by contrast, typically offered no real-time detection, no severity classification or cost estimation, no personalization across car types, and no user-facing interface; the proposed framework addresses each of these gaps.
A. Dataset Preparation
The dataset used in this project consists of annotated images of damaged vehicles, organized into seven major vehicle part classes: bonnet, bumper, dickey, door, fender, light, and windshield. Each class includes images where visible damage is labeled and localized using bounding boxes, following the YOLO annotation format. For effective model training and evaluation, the dataset was divided into distinct training and testing sets for each damage category. This ensures that the model is exposed to a wide range of damage types and positions during training while still being validated against unseen data to test generalization.
TABLE I: Description of the dataset

Class        Train Size   Test Size
Bonnet          172           43
Bumper          284           71
Dickey          153           38
Door            267           66
Fender          198           49
Light           186           46
Windshield      100           27
Doors and bumpers are the parts most frequently damaged in car accidents, so they form a substantial share of the dataset. Windshield damage is much rarer but plays a key role in calculating repair costs, so care was taken to include enough windshield samples for the model to learn to detect them accurately.
All training samples were either assembled from an existing dataset or manually annotated to ensure high quality. The dataset was built to reflect a wide range of real-world conditions, covering varied viewing angles, car colours, lighting conditions, and accident damage patterns.
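As a sketch of the YOLO annotation format mentioned above, the snippet below parses one normalized label line into pixel coordinates; the helper function and example values are illustrative, not taken from the project's code.

```python
# Sketch: converting one YOLO-format label line to pixel coordinates.
# YOLO annotations store "class_id x_center y_center width height",
# all normalized to [0, 1] relative to the image size.
# The class list mirrors the seven part classes in Table I.

PART_CLASSES = ["bonnet", "bumper", "dickey", "door", "fender", "light", "windshield"]

def yolo_label_to_pixel_box(line: str, img_w: int, img_h: int):
    """Parse one annotation line into (part_name, (x1, y1, x2, y2))."""
    class_id, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    x1, y1 = xc - w / 2, yc - h / 2
    return PART_CLASSES[int(class_id)], (round(x1), round(y1), round(x1 + w), round(y1 + h))

# Example: a bumper box centred in a 640x480 image, half its width, a quarter its height.
part, box = yolo_label_to_pixel_box("1 0.5 0.5 0.5 0.25", 640, 480)
print(part, box)  # bumper (160, 180, 480, 300)
```

The normalized format is what makes the annotations independent of image resolution, which suits a dataset gathered at varied sizes and angles.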
-
B. Detecting Vehicle Damage with YOLO
YOLO, the key detection component of this system, is a real-time object detection algorithm used here to detect and localize damaged vehicle parts. Unlike traditional detectors such as R-CNN and Fast R-CNN, which first propose candidate regions and then classify them in a final step, YOLO scans the whole image in a single pass and directly predicts rectangular bounding boxes around damaged portions, pinpointing which part (door, bumper, or bonnet, for example) has been damaged. Because it finds damage in a single scan, it is much faster than older methods. For this project, a YOLOv8 model was trained on a custom dataset of real-world images showing cars with many types of damage. Fig 1 illustrates each phase of the process from image input to output. Here is how it works:
-
Image Upload: The user opens the app and uploads a clear image of the damaged car through its user interface. The system then scans this file to detect and classify the damaged parts.
-
Model Inference: Upon analyzing an input image, the YOLO model produces bounding boxes accompanied by labels that specify the detected damaged parts.
-
Visualization: Detected parts are displayed using bounding boxes drawn over the image, providing a clear visual representation of the damaged regions.
-
Part Extraction: The class labels of detected parts are passed to the cost and severity estimation modules for further processing.
Fig 1: Detection of Car-damage and cost-estimation framework
This approach enables real-time detection and localization of multiple damaged parts in a single image, significantly reducing the time and effort required for manual assessment. The ability of YOLO to maintain accuracy despite varying image quality is one of its primary advantages; this includes busy backgrounds, odd angles, and poor lighting. The Ultralytics Python library was used to integrate the model for this project, making it easy to configure the system to meet its specific demands.
As a result, the model was simple to load, modify, and use in a Python-based cloud environment. Among the results generated by YOLO, the location and label of each detected damage are transmitted to the following phases of the system, where the repair cost is estimated and the level of damage is evaluated. Together, these parts form a completely automated pipeline for evaluating vehicle damage.
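As a minimal sketch of this hand-off from detection to the downstream modules, the snippet below filters raw detections and extracts the part labels. The detection tuple layout and the 0.5 confidence cut-off are assumptions for illustration, not values specified in the paper.

```python
# Sketch (assumed data layout): each detection is (class_name, confidence,
# (x1, y1, x2, y2)). Keep only confident detections and collapse duplicates
# so each damaged part is priced once downstream.
CONF_THRESHOLD = 0.5  # assumed cut-off, not from the paper

def extract_damaged_parts(detections):
    parts = []
    for name, conf, box in detections:
        if conf >= CONF_THRESHOLD and name not in parts:
            parts.append(name)
    return parts

raw = [("bonnet", 0.91, (50, 40, 300, 200)),
       ("bumper", 0.84, (20, 210, 620, 330)),
       ("bumper", 0.62, (25, 215, 610, 335)),  # duplicate part, lower confidence
       ("light", 0.31, (500, 60, 560, 110))]   # below threshold, dropped
print(extract_damaged_parts(raw))  # ['bonnet', 'bumper']
```

Only the surviving class labels need to travel to the severity and cost modules; the boxes stay with the visualization step.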
-
-
C. Severity Classification
After identifying which parts of the car are damaged, the system evaluates the severity of each issue, classifying damage into three levels: minor, moderate, and severe. This coarse classification keeps the presentation simple and supports faster downstream decisions, and it requires only a small lookup dataset keyed on car brand, model, and damaged part. Each detected part is classified using a rule-based method backed by this dataset, which records a typical severity level for each (brand, model, part) combination based on the average extent of damage observed for that brand and model.
When the system detects a part, it performs a lookup using the car brand and model together with the name of the detected part; if a match is found, the most appropriate severity level from the dataset is returned. This method ensures that the severity level corresponds to the particular vehicle type and component.
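The lookup described above can be sketched as follows; the table entries and the fallback severity are illustrative assumptions, not the project's actual dataset.

```python
# Sketch of the rule-based severity lookup: keyed on (brand, model, part).
# The entries and the "moderate" default below are illustrative assumptions.
SEVERITY_TABLE = {
    ("Tata", "Nexon", "bonnet"): "severe",
    ("Tata", "Nexon", "bumper"): "moderate",
    ("Hyundai", "i20", "door"): "minor",
}

def lookup_severity(brand, model, part, default="moderate"):
    """Return the recorded severity for this vehicle/part, else a default."""
    return SEVERITY_TABLE.get((brand, model, part), default)

print(lookup_severity("Tata", "Nexon", "bonnet"))   # severe
print(lookup_severity("Hyundai", "i20", "bumper"))  # moderate (fallback)
```

A plain dictionary lookup keeps the classification fully interpretable and makes adding new vehicle entries a one-line change.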
-
D. Cost Estimation Based on Vehicle Brand and Model
Cost estimation keyed on vehicle brand and model is one of the strengths of this system. Accuracy matters here: when filing an insurance claim, the estimated amount should correctly reflect the damage, and when requesting estimates from a repair workshop, the figures should reflect the condition of that specific car rather than a generic average. This level of detail, though it may appear overwhelming at first, brings clarity and builds trust in the process. Cost estimation is carried out using a well-organized dataset stored in CSV format. For every instance in the dataset, the following data points are recorded:
-
Brand (e.g., Tata, Hyundai)
-
Model (e.g., Nexon, i20)
-
Damaged Part (e.g., bumper, door, light)
-
Severity (minor, moderate, severe)
-
Estimated Repair Cost (in INR)
After a part is identified by the YOLO model and its damage severity is assessed, the system uses a rule-based method to determine repair costs. The lookup first filters the dataset by the selected brand and model, then searches for the row matching the detected part; if a match is found, the corresponding severity level and repair cost are returned. New car models can be added easily, and no separate, expensive regression model is needed: regression would typically require a large amount of training data, whereas the lookup remains fully interpretable. This feature is directly integrated with part detection and severity analysis, forming a transparent system that mirrors how a repair assessment is actually made and balances accuracy with efficiency.
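A minimal sketch of the CSV-backed cost lookup, using Python's standard csv module; the rows, column names, and helper function are illustrative assumptions, though the totals mirror the worked example given later in the Results section.

```python
import csv
import io

# Sketch of the CSV-backed cost lookup. The rows and column names below are
# illustrative assumptions; the real dataset records brand-, model-, and
# part-specific severity and repair cost in INR.
CSV_DATA = """brand,model,part,severity,cost_inr
Tata,Nexon,bonnet,severe,20000
Tata,Nexon,bumper,moderate,14000
Hyundai,i20,door,minor,6000
"""

def estimate_costs(brand, model, damaged_parts):
    """Return per-part (severity, cost) plus the total repair cost."""
    rows = list(csv.DictReader(io.StringIO(CSV_DATA)))
    report, total = {}, 0
    for part in damaged_parts:
        for row in rows:
            if (row["brand"], row["model"], row["part"]) == (brand, model, part):
                cost = int(row["cost_inr"])
                report[part] = (row["severity"], cost)
                total += cost
    return report, total

report, total = estimate_costs("Tata", "Nexon", ["bonnet", "bumper"])
print(report, total)  # {'bonnet': ('severe', 20000), 'bumper': ('moderate', 14000)} 34000
```

In deployment the CSV would be loaded once (the paper's stack lists Pandas for data handling), but the lookup logic stays the same.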
-
-
E. User Interaction and System Communication Flow
The main interface through which users interact with the automated vehicle damage detection and repair cost estimation platform is a custom Flutter-based mobile application. Whether the user is a car owner, insurance agent, or service employee, this mobile application makes the technology accessible to ordinary people, and it acts as a mediator between the user and the repair workshops.
Users choose the car brand and model, upload images of the damaged vehicle, and immediately receive a detailed assessment. The application handles data retrieval and communicates seamlessly with the backend servers where the AI inference runs. Designed with real end users in mind, it presents a clean, responsive interface built on a simple architecture with practical features. Because it is built with Flutter, the application behaves consistently across platforms. Users can see which parts are damaged and check the severity level of each, and the application also supports booking repair appointments, giving feedback, and more, so the entire evaluation process is completed in one place.
Functional Overview:
-
Image Capture: Allows users to load images from the gallery.
-
Vehicle Selection: Allows users to select car brands and models via the interface. This input is critical for retrieving brand- and model-specific cost and severity data.
-
Data Submission: Upon image upload and input selection, the user initiates the damage analysis by pressing a submission button. The app sends the selected data via an HTTP POST request to the backend Flask server.
-
Backend Processing: The trained YOLOv8 model is deployed on the server to detect damage, while severity classification and cost data are retrieved from a predefined dataset.
-
Result Display: The mobile app decodes the server's JSON response and displays:
-
An annotated image showing detected damaged parts
-
A list of damaged components with corresponding severity levels and estimated repair costs
-
A total repair cost summary at the bottom of the screen.
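The result display step can be sketched as follows; the JSON field names and the base64-encoded image field are assumptions consistent with the response contents listed above, since the paper does not give the exact schema.

```python
import base64
import json

# Sketch of how the client might decode the backend's JSON reply.
# Field names ("parts", "total_cost_inr", "annotated_image_b64") are
# assumptions based on the response contents described in this section.
raw_image = b"\x89PNG..."  # stand-in for the annotated image bytes
response_text = json.dumps({
    "parts": [
        {"name": "bonnet", "severity": "severe", "cost_inr": 20000},
        {"name": "bumper", "severity": "moderate", "cost_inr": 14000},
    ],
    "total_cost_inr": 34000,
    "annotated_image_b64": base64.b64encode(raw_image).decode("ascii"),
})

reply = json.loads(response_text)
image_bytes = base64.b64decode(reply["annotated_image_b64"])
print(reply["total_cost_inr"], len(reply["parts"]))  # 34000 2
```

Base64 encoding lets the annotated image travel inside the same JSON payload as the cost breakdown, so the app needs only one request per assessment.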
-
Fig. 2: Use Case Diagram of mobile user interaction with the system.
Fig. 2 depicts the interaction between the mobile user and the system. This diagram captures how the user initiates actions such as image upload and vehicle selection, and how these inputs flow into backend processes like detection, cost retrieval, and output generation.
Data Exchange in End-to-End System:
To illustrate the interaction between the user, mobile application, backend server, and machine learning model, a Sequence Diagram is presented (Fig. 3). It outlines the step- by-step communication flow involved in damage detection, cost estimation, and service booking. The sequence begins with the user uploading an image and selecting vehicle details. The submitted data is routed to the Flask backend, where the YOLOv8 model identifies damage and associates it with severity and repair cost based on a structured dataset. The results are returned to the mobile app for display.
Fig. 3. End-to-end sequence of prediction
End-to-End Activity Workflow:
The proposed system integrates deep learning-based damage detection with a complete service and booking workflow (Fig 4). Through the mobile app, users can assess damage, view severity and cost, and book repair services. Upon image upload, the backend uses a YOLOv8 model to detect damaged parts and retrieve severity and cost details from a structured dataset. Results are instantly shown on the mobile app.
Users can then proceed with booking, payment, and receive service updates. Workshops upload reports post-service, and users can provide feedback. This end-to-end flow ensures that the ML module operates as part of a seamless real-world system.
Fig. 4. Activity Diagram showing end-to-end user flow
-
-
F. Flask API Deployment and Backend Integration
To facilitate seamless communication between the mobile application and the machine learning components, the system is deployed using a Flask-based RESTful backend architecture. This design decouples the front-end logic from the computationally intensive prediction process, allowing the machine learning model to operate server-side while the Flutter mobile application handles user interactions on the client side.
System Workflow:
Upon receiving an image and selected vehicle details from the user via the mobile interface, the Flask server performs the following sequence of operations:
-
Model Invocation: The server loads the trained YOLOv8 model (best.pt) and performs object detection on the uploaded image to identify damaged car parts.
-
Data Mapping: The detected part names are then matched against a structured CSV dataset containing brand- and model-specific repair costs and severity levels.
-
Response Generation: The server compiles the results into a structured JSON response, including:
-
Detected parts
-
Associated severity classification
-
Estimated repair cost for each part
-
The total calculated cost
-
Annotated damage image (encoded in base64)
-
API Structure:
The backend exposes a set of RESTful endpoints to manage various system functions:
/predict: Handles the core damage assessment logic. Accepts POST requests containing an image file and metadata (brand and model). The system then provides outputs including detected damages, severity levels, and estimated repair costs.
/book: Manages service booking requests submitted by users after prediction results.
/feedback: This endpoint collects user input regarding their experience with the service and the system's usability.
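As an illustration of the /predict response described above, the following framework-independent sketch assembles the JSON payload returned to the client. The field names and helper function are assumptions (the paper does not give the exact schema); in the real system the values come from the YOLOv8 model and the CSV dataset rather than hard-coded tuples.

```python
import json

# Framework-independent sketch of what the /predict handler assembles.
# Detection, severity, and cost values here are placeholders; field names
# are assumptions about the response schema described in this section.
def build_predict_response(detections, annotated_image_b64):
    """detections: list of (part_name, severity, cost_inr) tuples."""
    parts = [{"name": n, "severity": s, "cost_inr": c} for n, s, c in detections]
    return json.dumps({
        "parts": parts,
        "total_cost_inr": sum(p["cost_inr"] for p in parts),
        "annotated_image_b64": annotated_image_b64,
    })

body = build_predict_response(
    [("bonnet", "severe", 20000), ("bumper", "moderate", 14000)], "aGVsbG8=")
print(json.loads(body)["total_cost_inr"])  # 34000
```

Keeping the payload assembly separate from the web framework makes the endpoint logic testable without running a Flask server.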
TABLE II: Technology Stack

Component            Technology Used
Backend Framework    Flask (Python)
Object Detection     YOLOv8 (Ultralytics)
Data Handling        Pandas
Model Inference      PyTorch, CUDA acceleration
Deployment           Flask
API Communication    HTTP with JSON payloads
-
-
RESULTS AND EVALUATION
To evaluate the effectiveness of the proposed vehicle damage assessment system, the YOLOv8 model was trained on a custom-labeled dataset comprising annotated images of damaged vehicles across seven part-specific classes: Bumper, Bonnet, Dickey, Door, Fender, Light, and Windshield. The system's performance was evaluated using both quantitative metrics and visual inspection of predictions.
-
Quantitative Evaluation
The model was trained for 50 epochs using the Ultralytics YOLOv8 framework and evaluated on a validation set containing 366 images and 410 instances. The key performance metrics are summarized below:
TABLE III: YOLOv8 Validation Metrics

Metric           Value
Precision (P)    0.876
Recall (R)       0.791
mAP@0.5          0.880
mAP@0.5:0.95     0.558
These results demonstrate strong model performance in detecting damaged parts. A high precision score (~88%) indicates that the model generates very few false positives, while the recall score shows that the model identifies a large portion of actual vehicle part damages. The mAP scores confirm that the bounding box predictions are accurate and reliable across various IoU thresholds.
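To make the metric definitions concrete, the arithmetic behind precision and recall can be checked with illustrative counts. The TP/FP/FN values below are assumptions chosen to approximately reproduce the reported scores over the 410 validation instances; they are not figures from the paper.

```python
# Precision = TP / (TP + FP); Recall = TP / (TP + FN).
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

# Illustrative counts only: roughly 324 of 410 instances found,
# with 46 false alarms, which approximately matches Table III.
tp, fp, fn = 324, 46, 86
print(round(precision(tp, fp), 3), round(recall(tp, fn), 3))  # 0.876 0.79
```

Reading the two together: precision penalizes false alarms, recall penalizes missed damage, and mAP folds both into a single localization-aware score.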
-
Training Analysis
The training analysis provides a comprehensive view of the model's learning progress over 50 epochs. The figure presents multiple key plots that illustrate the YOLOv8 model's learning behavior and performance throughout the training process.
Loss Curves: The figure shows three main types of loss
– box loss, objectness loss, and classification loss – each of which steadily decreases over time. This indicates that the model is effectively minimizing error in:
-
Localizing bounding boxes (box loss),
-
Detecting object presence (objectness loss),
-
Correctly assigning class labels (classification loss).
Precision and Recall Trends: Both precision and recall improve across epochs. Rising precision means the model is producing fewer false positives (i.e., it's becoming more confident and correct in what it detects). Increasing recall indicates the model is catching more true instances of car parts (i.e., fewer false negatives).
mAP@0.5 and mAP@0.5:0.95: The rising mean Average Precision (mAP) curves show that the model's overall detection accuracy is improving.
-
mAP@0.5 measures how well the model predicts bounding boxes with at least 50% IoU overlap – a good indicator of overall detection performance.
-
mAP@0.5:0.95 is a stricter metric and reflects the model's performance across a range of IoU thresholds, showing that the model generalizes well even when precise localization is required.
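For reference, the IoU overlap behind these thresholds can be computed as below; this is a minimal sketch with boxes assumed to be corner coordinates.

```python
# Minimal Intersection-over-Union, the overlap measure behind the mAP@0.5
# and mAP@0.5:0.95 thresholds. Boxes are (x1, y1, x2, y2) corner tuples.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# A prediction shifted 20 px from a 100x100 ground-truth box:
print(round(iou((0, 0, 100, 100), (20, 0, 120, 100)), 3))  # 0.667 (passes the 0.5 threshold)
```

At mAP@0.5 this shifted box counts as a correct detection, while at the stricter end of mAP@0.5:0.95 it would not, which is why the two scores in Table III differ.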
Fig.5. shows YOLOv8 training metrics over 50 epochs, including loss, precision, recall, and mAP trends.
This analysis (Fig.5) confirms that the model is learning consistently and not overfitting, as training and validation metrics remain stable and improve simultaneously. The trends suggest strong convergence and support the model's readiness for real-world deployment in damage detection tasks.
-
-
-
Bounding Box Label Distribution
This plot (Fig.6) provides insight into how labeled vehicle parts are distributed spatially within the dataset. The axes represent normalized values for bounding box center coordinates (x, y), width, and height. The distribution indicates that most annotations are concentrated near the center of the images, which is typical since most car parts (e.g., doors, bumpers) tend to appear centrally in vehicle photos.
Fig.6. Label distribution and correlation of bounding box coordinates
The figure also shows the variation in object sizes – with widths and heights covering a broad range – demonstrating that the model was trained on parts of different scales. This diversity is crucial for helping the model generalize to real-world scenarios, where damage may occur in any region of the vehicle and at varying sizes.
Fig.7. Predicted part detections on training images. Bounding boxes indicate detected parts such as bumper, bonnet, and windshield.
The image (Fig.7) displays the results of the part detection model applied to various training images of damaged vehicles. Each vehicle component, such as the bumper, bonnet, and windshield, is identified and marked with bounding boxes. These boxes highlight the regions where the system has detected specific parts. This visual output demonstrates how the model is able to recognize and isolate different components of a car, even when they are damaged or partially obscured. Such detections are essential for accurately assessing damage and enabling further steps like severity classification and cost estimation.
(a) (b)
Fig.8. Part-wise damage detection. (a) Original image of a damaged vehicle. (b) Detection result using YOLOv8 showing localized damaged parts (bonnet and bumper) with confidence scores.
Fig.9. Visual output of damage detection with severity and cost estimation.
Figs. 9 and 10 demonstrate the system's end-to-end functionality. The trained YOLOv8 model identifies the damaged bonnet and bumper in the uploaded vehicle image, with their respective confidence scores. These parts are then mapped to a predefined cost-severity dataset based on the selected vehicle brand and model. The system calculates individual repair costs and severity levels (e.g., ₹20,000 for the bonnet marked as Severe, and ₹14,000 for the bumper marked as Moderate) and returns a total estimated repair cost of ₹34,000. This output reflects the practical utility of the system in providing visual proof along with transparent, part-wise cost breakdowns.
-
-
FUTURE WORK
Although the current system demonstrates effective vehicle damage detection and cost estimation, there are several areas where future enhancements can significantly broaden its scope, improve accuracy, and increase scalability.
1) Severity Classification Using Deep Learning:
In the current implementation, severity levels are assigned based on a predefined rule-based dataset that maps detected parts to static severity labels. While effective, this approach lacks adaptability in real-world scenarios where visual severity can vary. Future developments could involve
training a dedicated deep learning model – such as a CNN classifier – to predict severity levels directly from visual input. This would allow for a more flexible and context- aware classification, accounting for damage size, shape, color distortion, and depth, thereby enhancing precision and reliability.
-
2) Enhanced Vehicle Coverage:
So far, the dataset has been narrow, limited to a small set of brands and car models. Future research should broaden coverage by collecting data for additional vehicle types and usage scenarios, which would enhance the system's real-world utility. Including smaller brands would also help the model generalize.
-
3) Data-Driven Learning and Continuous Enhancement:
Continuous improvement involves regular interaction with users, focusing on their satisfaction, the accuracy of cost estimates, and the reliability of damage predictions. Because the model is trained on real-world image data, it can be retrained as new situations arise, supporting ongoing learning and improvement through periodic dataset and model updates.
-
4) Deployment on the Cloud:
The current server configuration is adequate for demonstrations and small-scale testing, but may not suffice under higher user loads or production deployment. Migrating the system to a cloud environment would provide high availability, load balancing, and elastically scalable computing resources.
-
-
CONCLUSION
In this study, an AI-based system combining computer vision and deep learning was developed for the automatic detection of vehicle damage and the assessment of repair costs. The proposed system uses a trained object detection model to identify damaged vehicle parts and a structured dataset to assess severity levels and derive part-wise repair costs.
Built on YOLOv8 and delivered through a smooth, interactive Flutter mobile platform, the system allows end users to assess damage in real time with ease. It offers a clear, transparent approach to damage detection during claims and repair processes, handling assessments without depending on manual checks or interventions. Through the mobile application interface, users can submit photos, view predictions, create bookings, and provide feedback. In practice, the model also proved reliable, as indicated by its high detection accuracy. Additionally, the modular architecture with RESTful API communication between services provides good extensibility and scalability. The system has shown strong potential in areas such as insurance claims processing, garage estimates, and customer self-assessment. In summary, this study lays the groundwork for effective automated vehicle damage assessment through a user-centric approach, opening new avenues of research and development in real-time AI-enabled automotive services.
-
REFERENCES
-
A. K. Maurya, R. Mishra, A. Agrawal, and S. Verma, Vehicle Damage Detection and Insurance Claim Assessment Using Deep Learning, International Journal of Innovative Science and Research Technology (IJISRT), vol. 8, no. 6, pp. 123 – 128, Jun. 2023
-
A. Ravikanth and M. Keerthana, Vehicle Damage Detection and Cost Estimation Using Deep Learning, International Journal of Engineering Research & Technology (IJERT), vol. 11, no. 2, pp. 45 – 48, Feb. 2022.
-
P. Singh, R. Yadav, and A. Chauhan, Vehicle Damage Detection using Deep Learning, International Journal of Scientific Research in Computer Science, Engineering and Information Technology, vol. 9, no. 3, pp. 229 – 234, May 2023.
-
D. Skachkova, M. Hildebrandt, and S. Stieglitz, Automated Damage Assessment of Vehicles Based on Images Using Machine Learning Techniques, Information, vol. 16, no. 2, p. 211, 2023. [Online].
Available: https://www.mdpi.com/2078-2489/16/2/211
-
Ultralytics, YOLOv8: Cutting-edge object detection architecture, [Online]. Available: https://docs.ultralytics.com/
-
Roy & Bhaduri (2023) – "Computer Vision Enabled Damage Detection Model with Improved YOLOv5" Presents "DenseSPH-YOLOv5," integrating DenseNet backbones, CBAM, and a Swin-transformer head to enhance multi-scale damage detection under noisy conditions
-
MDPI (2024) – "Automated Car Damage Assessment Using Computer Vision" Uses an ensemble of YOLOv5 detectors, adjusting thresholds and leveraging multiple depth models to minimize false positives and ensure real-time inference for insurance claim workflows
-
Mohammed et al. (2023) – "Deep Learning Based Car Damage Classification and Cost Estimation" Uses a dual-stage Mask R-CNN pipeline to locate damage areas and estimate repair cost, achieving high accuracy and highlighting the shift toward cost-aware models.
-
S. P. Soorya et al. (2020) – "Assessing Car Damage Using Mask R-CNN" Proposes fine-tuning Mask R-CNN with transfer learning to identify and classify damage types. Demonstrates the effectiveness of instance-level segmentation for downstream severity analysis.
-
R. A. Budo and M. R. S. Rasyid (2024) – Smart Car Damage Assessment Using Enhanced YOLO Algorithm Enhances YOLOv9 with CBAM and advanced loss functions for improved damage detection and proposes a numerical Damage Severity Index (DSI), but stops short of cost estimation.
-
S. Adnan Yusuf et al. (2022) – Automotive Parts Assessment: Applying Real-Time Instance Segmentation Models to Identify Vehicle Parts Compares real-time instance segmentation models like SipMask and Yolact for identifying vehicle parts, a key step towards accurate pricing and severity analysis.
-
A. Neeta Verma et al. (2025) – Car Damage Detection Analysis using Deep Learning, Computer Vision Techniques Discusses classification- based damage detection using CNNs. Important context for understanding severity/ cost mapping pipelines.
-
WA Rao et al. (2023) – Automatic Damaged Vehicle Estimator using Enhanced Deep Learning Algorithm Combines Mask R-CNN and transfer learning for predicting damage severity and estimating cost, yet lacks end-to-end mobile integration
