

- Open Access
- Authors: L. Indhu, Dr. S. M. Kulkarni
- Paper ID: IJERTV14IS040271
- Volume & Issue: Volume 14, Issue 04 (April 2025)
- Published (First Online): 28-04-2025
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Intelligent Surveillance System for Helmet and Rider Compliance
L. Indhu
Student at E&TC Dept, PVPIT, Bavdhan, Savitribai Phule Pune University, Pune, Maharashtra
Dr. S. M. Kulkarni
H.O.D., E&TC Dept, PVPIT, Bavdhan, Savitribai Phule Pune University, Pune, Maharashtra
Abstract- With the rising number of motorcycles in India, safety concerns have intensified. Studies show that riders without helmets are at a 2.5 times higher risk of fatal accidents compared to those who wear helmets. Traditional video surveillance requires considerable human supervision, leading to fatigue and bias. This paper proposes an automated system using YOLOv3 for helmet detection and a binary image projection technique for triple riding detection. Real-world challenges like occlusion, environmental variations, and low-quality video are addressed. The results demonstrate the system's efficacy in real-time surveillance scenarios.
Keywords: YOLOv3, Object Detection, Helmet Detection, Convolutional Neural Networks, Rider Compliance
INTRODUCTION
Motorcycles are a widely used means of transportation globally, but they pose significant safety risks. According to the World Health Organization (WHO), motorcyclists represent 28% of global road traffic deaths [1]. Head injuries remain the leading cause of motorcycle fatalities, with helmet use shown to reduce the risk of serious head injury by 69% and the risk of death by 42%. A report by Delhi Traffic Police (2019) indicated that 35-40% of motorcycle fatalities were linked to improper helmet use [2]. Traditional traffic monitoring systems depend heavily on human surveillance, which is prone to error and fatigue. The evolution of AI and computer vision has provided novel approaches to automate surveillance and reduce reliance on human monitoring.
However, challenges like occlusion, motion variability, lighting conditions, and low-resolution videos make helmet and rider detection difficult. Recent advancements in object detection, especially YOLOv3, offer real-time and accurate solutions [3]. This paper proposes a robust, intelligent system to detect helmet compliance and identify triple riding violations.
EXISTING WORK
Several techniques have been proposed for helmet detection. Shape-based methods using Hough Circle Transform were among the earliest, focusing on identifying helmet contours. Feature-based methods using Histogram of Oriented Gradients (HOG) and Scale-Invariant Feature Transform (SIFT) further enhanced recognition capabilities.
With the advent of machine learning, Support Vector Machines (SVM) and Haar cascades were adopted to increase classification accuracy. More recently, deep learning models such as YOLOv5 and YOLOv8 have been explored, offering enhanced speed and accuracy in object detection [4], [5]. Despite advancements, few approaches addressed triple riding detection, which is critical for traffic safety enforcement in countries like India.
PROPOSED TECHNIQUES
This work uses YOLOv3 for helmet detection and a binary image projection technique for rider counting.
System Design
- Data Collection: Images and videos from CCTV and public datasets.
- Pre-processing: Frame extraction, resizing, and annotation.
- Training: YOLOv3 trained to detect helmets, bikes, and riders.
- Detection: Objects detected in real time with bounding boxes.
- Rider Counting: Binary image projection used to count riders based on pixel-density analysis.
Fig. 1. Workflow of the Proposed Helmet Detection System
Methodology
The proposed system follows a structured pipeline comprising data collection, pre-processing, model training, object detection, and rider counting to achieve accurate helmet and triple riding detection in real-time scenarios.
Data Collection:
To ensure robust and diverse model training, image and video data were sourced from both live CCTV footage and publicly available traffic surveillance datasets. This hybrid approach allowed the inclusion of varied traffic conditions, rider orientations, and lighting environments, thereby enhancing the model's generalization capabilities across different urban settings.
Pre-processing:
Raw video inputs were processed to extract individual frames at defined intervals, which were subsequently resized to standard dimensions suitable for the YOLOv3 network. Manual annotation was performed to label objects of interest, including riders, helmets, and motorcycles. This step was crucial for supervised learning, ensuring the model receives accurate ground truth data during training.
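As a concrete illustration of this step, the sketch below extracts frames from a video at a fixed interval with OpenCV and resizes them to the 416x416 input size commonly used with YOLOv3; the file paths, sampling interval, and target size are illustrative assumptions rather than the exact values used in this work. Annotation itself was done manually and is not shown.

```python
import os
import cv2

def extract_frames(video_path, out_dir, every_n_frames=10, size=(416, 416)):
    """Sample frames from a video, resize them, and save them as JPEG images."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            resized = cv2.resize(frame, size)  # standard YOLOv3 input dimensions
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), resized)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Example (hypothetical file names):
# extract_frames("cctv_clip.mp4", "frames/", every_n_frames=15)
```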
Model Training:
The annotated dataset was used to fine-tune the YOLOv3 model, which is known for its balance of speed and accuracy in object detection tasks. The network was specifically trained to identify key features associated with helmets, bikes, and individual riders. Transfer learning was employed to leverage pre-trained weights, thereby reducing training time and improving convergence.
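The training toolchain is not detailed here; one common way to fine-tune YOLOv3 with transfer learning is the original Darknet framework, initialising from the pre-trained darknet53.conv.74 backbone weights. The sketch below prepares Darknet-style configuration files; the class list, file paths, and the training command in the final comment are assumptions for illustration, not the authors' exact setup.

```python
import os

# Assumed label set for this task (helmet/no-helmet heads, motorcycles, riders).
CLASSES = ["helmet", "no_helmet", "motorcycle", "rider"]

def write_darknet_config(out_dir="cfg"):
    """Write the obj.names and obj.data files Darknet expects for training."""
    os.makedirs(out_dir, exist_ok=True)
    # obj.names: one class label per line, in the same order as the annotations.
    with open(os.path.join(out_dir, "obj.names"), "w") as f:
        f.write("\n".join(CLASSES) + "\n")
    # obj.data: class count plus paths to the train/validation image lists.
    with open(os.path.join(out_dir, "obj.data"), "w") as f:
        f.write(
            f"classes = {len(CLASSES)}\n"
            "train = data/train.txt\n"
            "valid = data/valid.txt\n"
            f"names = {out_dir}/obj.names\n"
            "backup = backup/\n"
        )

write_darknet_config()
# Fine-tuning then starts from the pre-trained backbone weights, e.g.:
#   ./darknet detector train cfg/obj.data cfg/yolov3.cfg darknet53.conv.74
```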
YOLOv3 Advantages
YOLOv3 processes images at 45 FPS, enabling real-time surveillance with minimal computational load. It predicts multiple bounding boxes per grid cell, improving localization and classification accuracy.
Fig. 2. Workflow of the YOLO scheme
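As an illustration of how such a detector can be run in real time, the sketch below loads a trained YOLOv3 model through OpenCV's DNN module and applies non-maximum suppression to the raw predictions. The file names and the confidence/NMS thresholds are assumed defaults, not values reported in this work.

```python
import cv2
import numpy as np

# Assumed file names; any trained YOLOv3 cfg/weights pair works here.
net = cv2.dnn.readNetFromDarknet("yolov3_helmet.cfg", "yolov3_helmet.weights")
output_layers = net.getUnconnectedOutLayersNames()

def detect(frame, conf_thresh=0.5, nms_thresh=0.4):
    """Return (class_id, confidence, [x, y, w, h]) tuples for one frame."""
    h, w = frame.shape[:2]
    # YOLOv3 expects a 416x416 blob scaled to [0, 1] with RGB channel order.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(output_layers)

    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf > conf_thresh:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(conf)
                class_ids.append(class_id)

    # Non-maximum suppression removes overlapping duplicate boxes.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_thresh, nms_thresh)
    return [(class_ids[i], confidences[i], boxes[i]) for i in np.array(keep).flatten()]
```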
Rider Counting
The preceding stages yield local, per-frame results, for example whether a given two-wheeler rider is wearing a helmet in that frame. Up to this point, however, the association between consecutive frames has been ignored, so to reduce false alarms these local results are merged across frames. The pipeline first detects the motorcycle and then the rider; helmet detection with YOLO is then applied only to the detected rider. Heads with and without helmets are separated and displayed in differently coloured bounding boxes, as shown in Fig. 3.
Fig. 3. Detection of helmet and no-helmet riders
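One simple way to merge these local, per-frame results across consecutive frames is a sliding-window majority vote per rider, which suppresses one-off misdetections before a violation is flagged. The window length and vote threshold below are illustrative assumptions, not the exact merging rule used in this work.

```python
from collections import deque

class ViolationSmoother:
    """Merge per-frame helmet decisions over consecutive frames to cut false alarms."""

    def __init__(self, window=15, min_votes=10):
        # window: how many recent frames to consider for each rider
        # min_votes: how many "no helmet" frames are needed to confirm a violation
        self.window = window
        self.min_votes = min_votes
        self.history = {}  # rider_id -> deque of booleans (True = no helmet)

    def update(self, rider_id, no_helmet_this_frame):
        buf = self.history.setdefault(rider_id, deque(maxlen=self.window))
        buf.append(bool(no_helmet_this_frame))
        # Raise an alert only once the violation persists across most recent frames.
        return sum(buf) >= self.min_votes

# Example: rider 7 is flagged only after "no helmet" is seen in >= 10 of the last 15 frames.
smoother = ViolationSmoother()
for frame_result in [True] * 12:
    alert = smoother.update(rider_id=7, no_helmet_this_frame=frame_result)
```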
To detect triple riding, we applied projection and morphological operations to count the number of heads detected above the motorcycle bounding box.
Fig. 4. Detection of triple riding
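A minimal sketch of this projection-and-morphology idea follows: the strip above the motorcycle bounding box is binarised, cleaned with morphological opening, and the column-wise projection profile is scanned for distinct peaks, each taken as one head. The strip height and all thresholds are assumptions for illustration, not the tuned values of this system.

```python
import cv2
import numpy as np

def count_heads(frame, bike_box, head_band=80, min_peak_width=15):
    """Count head-like blobs in the strip above a detected motorcycle box.

    bike_box is (x, y, w, h) in pixels; head_band is the assumed height of the
    strip above the box in which rider heads are expected to appear.
    """
    x, y, w, h = bike_box
    top = max(0, y - head_band)
    roi = frame[top:y, x:x + w]
    if roi.size == 0:
        return 0

    # Binarise the region of interest (Otsu picks the threshold automatically).
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological opening removes small noise blobs before projection.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

    # Column-wise projection: number of foreground pixels in each column.
    projection = (clean > 0).sum(axis=0)
    if projection.max() == 0:
        return 0
    active = projection > 0.3 * projection.max()

    # Each sufficiently wide run of active columns is counted as one head.
    heads, run = 0, 0
    for col_active in active:
        run = run + 1 if col_active else 0
        if run == min_peak_width:
            heads += 1
    return heads

# A head count above 2 for one motorcycle indicates a triple-riding violation.
```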
RESULTS AND ANALYSIS
The system was tested on more than 500 real-world CCTV video clips. Detection accuracy was around 92% for helmet compliance and 89% for triple riding cases.
| Metric    | Helmet Detection (%) | Triple Riding Detection (%) |
|-----------|----------------------|-----------------------------|
| Accuracy  | 92                   | 89                          |
| Precision | 90                   | 87                          |
| Recall    | 91                   | 88                          |
The proposed intelligent surveillance system demonstrates robust performance in both helmet detection and triple riding detection tasks. Quantitative evaluation reveals that the helmet detection module achieves an accuracy of 92%, precision of 90%, and recall of 91%, indicating a high level of consistency and correctness in identifying riders wearing helmets. Similarly, the triple riding detection module reports an accuracy of 89%, precision of 87%, and recall of 88%, reflecting reliable detection of instances involving more than two riders. These results validate the effectiveness of the implemented YOLOv3-based object detection framework, showcasing its suitability for real-time traffic monitoring and law enforcement applications. The slight variation in metrics between the two detection tasks may be attributed to the increased complexity of identifying triple riding scenarios, which often involve occlusions and variations in rider positioning. Overall, the system exhibits promising accuracy and generalization capability, making it a viable solution for enhancing road safety compliance in urban environments.
Fig. 5. Performance Metrics Comparison
Fig. 6. Real-time results of helmet and triple riding detection
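For reference, the reported accuracy, precision, and recall follow the standard definitions in terms of true/false positives and negatives; the short sketch below shows the computation with purely illustrative counts, since the underlying confusion matrices are not reported here.

```python
def detection_metrics(tp, fp, fn, tn):
    """Standard classification metrics used in the evaluation above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Example with purely illustrative counts (not the paper's actual data):
acc, prec, rec = detection_metrics(tp=90, fp=10, fn=9, tn=91)
```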
CONCLUSION
This paper introduces an AI-driven intelligent surveillance system designed to enforce road safety by detecting helmet compliance and identifying rider violations such as triple riding. Utilizing the YOLOv3 object detection algorithm, the system achieves high accuracy, precision, and recall, demonstrating its potential for reliable real-time deployment. The proposed framework significantly reduces the dependency on human monitoring by automating the detection process, thereby increasing operational efficiency and consistency in traffic surveillance.
In addition to its robust performance under typical daytime conditions, the system architecture is scalable and adaptable for further improvements. Future work will focus on integrating advanced features such as automatic license plate recognition (ALPR) to enable direct identification of violators, support for low-light and night-time detection using infrared or thermal imaging, and training on more diverse and larger datasets to improve generalization across different geographic regions and traffic conditions. By addressing these aspects, the proposed solution can evolve into a comprehensive tool for enhancing road safety and supporting law enforcement agencies in maintaining compliance with traffic regulations.
REFERENCES
[1] World Health Organization, "Global Status Report on Road Safety 2023," WHO, 2023.
[2] Delhi Traffic Police, "Annual Report 2019," 2019.
[3] Sumanta Chatterjee et al., "Next-Generation Helmet Detection: A Real-Time Approach," ESR Journals, Nov. 2024.
[4] Vo et al., "Robust Motorcycle Helmet Detection in Real-World Scenarios," CVPRW, July 2024.
[5] "AI-Based Helmet Violation Detection for Traffic Management System," ScienceDirect, July 2024.
[6] "Real-Time Automatic Detection of Motorcycle Helmet Based on Improved YOLOv8 Algorithm," ESR Groups, Sep. 2023.