Tracking and Detection System for Law Enforcement of Helmet Wearing

DOI : 10.17577/IJERTCONV8IS08012




1 Dr. Ramar K, Dean/R&D, Muthyammal Engineering College, Rasipuram, Tamilnadu.

2 Nithishkumar K, IV Year/CSE, Muthyammal Engineering College, Rasipuram, Tamilnadu.

3 Raj Chakravarthy A.V, IV Year/CSE, Muthyammal Engineering College, Rasipuram, Tamilnadu.

4 Venkatesh G, IV Year/CSE, Muthyammal Engineering College, Rasipuram, Tamilnadu.

Abstract: Two-wheelers are a very popular mode of transportation in almost every country. However, there is a high risk involved because of the limited protection they offer. To reduce this risk, it is highly desirable for bike riders to wear a helmet. Observing the usefulness of helmets, governments have made riding without a helmet a punishable offense and have adopted manual strategies to catch violators. However, the existing video-surveillance-based methods are passive and require significant human assistance. In general, such systems are infeasible due to the involvement of humans, whose efficiency decreases over long durations. Automating this process is highly desirable for reliable and robust monitoring of these violations, and it also significantly reduces the amount of human resources needed. Moreover, many countries are adopting systems involving surveillance cameras at public places, so a solution for detecting violators using the existing infrastructure is also cost-effective. The proposed approach first detects bike riders from surveillance video using background subtraction and object segmentation. It then determines whether the rider is wearing a helmet using visual features and a binary classifier. We also present a consolidation approach for violation reporting, which improves the reliability of the proposed approach.

Keywords: Helmet Detection, Video Surveillance, Background Subtraction, Machine Learning, Binary Classifier.


Occlusion: In real-life scenarios, dynamic objects often occlude each other, so the object of interest may be only partially visible. Segmentation and classification become difficult for these partially visible objects.

Direction of Motion: Three-dimensional objects generally look different from different angles. It is well known that classifier accuracy depends on the features used, which in turn depend to some extent on the viewing angle. A reasonable example is the appearance of a bike rider from the front view versus the side view.

Temporal Changes in Conditions: Over time, environmental conditions such as illumination and shadows change. These changes may be gradual or sudden, and they increase the complexity of tasks like background modeling.

Quality of Video Feed: CCTV cameras generally capture low-resolution video, and conditions such as low light and bad weather complicate matters further. Due to such limitations, tasks such as segmentation, classification and tracking become even more difficult. As noted in the literature, a successful framework for surveillance applications should have useful properties such as real-time performance, fine tuning, robustness to sudden changes, and predictive ability. Keeping these challenges and desired properties in mind, we propose a method for automatic detection of bike riders without helmets, working in real time on feeds from existing security cameras.


    Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task.
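To make the idea of "building a model from training data" concrete, here is a toy sketch of a learned binary classifier (a nearest-centroid rule in pure Python). It is purely illustrative and is not part of the paper's actual pipeline; the data and labels are made up.

```python
# Minimal illustration of "learning from training data": a nearest-centroid
# binary classifier. Toy sketch only; not the paper's method.

def train_centroids(samples, labels):
    """Compute the mean feature vector (centroid) of each class."""
    centroids = {}
    for label in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == label]
        dim = len(rows[0])
        centroids[label] = [sum(r[i] for r in rows) / len(rows) for i in range(dim)]
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

# Toy "training data": two clusters in a 2-D feature space.
X = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]]
y = ["no_helmet", "no_helmet", "helmet", "helmet"]
model = train_centroids(X, y)
print(predict(model, [0.95, 1.05]))  # a point near the "helmet" cluster -> helmet
```

The same pattern, fit on samples and then predict on unseen inputs, is what the SVM classifier used later in this paper follows, only with far richer features.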

The rest of the paper is organized as follows. Section II describes the literature survey, Section III the system requirements, Section IV the experimental setup, and Section V the proposed modules; Section VI concludes the paper.


    Robust Real-Time Unusual Event Detection Using Multiple Fixed-Location Monitors Method [1]

AUTHORS: Amit Adam, Ehud Rivlin, Ilan Shimshoni, David Reinitz

The algorithm is based on multiple local monitors which collect low-level statistics. Each local monitor produces an alert if its current measurement is unusual, and these alerts are integrated into a final decision regarding the existence of an unusual event. The algorithm satisfies a set of requirements that are critical for the successful deployment of any large-scale surveillance system. In particular, it requires a minimal setup (taking only a few minutes) and is fully automatic afterwards. Since it is not based on object tracks, it is robust and works well in crowded scenes where tracking-based algorithms are likely to fail. The algorithm is effective as soon as sufficient low-level observations representing the routine activity have been collected, which usually happens after a few minutes, and it runs in real time. It was tested on a variety of real-life crowded scenes; a ground truth was extracted for these scenes, with respect to which detection and false-alarm rates are reported.

    Real-Time On-Road Vehicle and Motorcycle Detection Using a Single Camera Method [2]

    AUTHORS: Bobo Duan, Wei Liu, Pengyu Fu, Chunyang Yang, Xuezhi Wen

A real-time monocular-vision-based rear vehicle and motorcycle detection and tracking approach is presented for Lane Change Assistant (LCA). To achieve robustness and accuracy, this work detects and tracks multiple vehicles and motorcycles on the road by combining multiple cues. To achieve real-time performance, multi-resolution technology is used to reduce computational complexity, and all algorithms have been implemented on an IMAP (Integrated Memory Array Processor) parallel vision board. Test results under various traffic scenes illustrate the accuracy, robustness and real-time performance of this work.

In recent years, detecting on-road vehicles with a vision sensor has become an active research area in automotive driver assistance systems (DAS), and much research on real-time detection of overtaking vehicles can be found in reviews. Besides vehicles, the motorcycle is another important on-road object that needs to be detected for DAS applications, especially for lane change assistance, but there has been little research on detecting on-road overtaking vehicles and motorcycles in the same system. There is even little work on detecting overtaking motorcycles alone; most current research on motorcycle detection is based on a fixed camera for traffic monitoring. Detecting vehicles and motorcycles in one system in real time poses major challenges; in general, robustness and real-time performance are the main ones. Robustness means the system should maintain high detection performance in complex and widely varying environments.

    Helmet presence classification with motorcycle detection and tracking Method [3]

    AUTHOR: J. Chiverton

Helmets are essential for the safety of a motorcycle rider; however, the enforcement of helmet wearing is a time-consuming, labor-intensive task. A system for the automatic classification and tracking of motorcycle riders with and without helmets is therefore described and tested. The system uses support vector machines trained on histograms derived from head-region image data of motorcycle riders, using both static photographs and individual image frames from video data. The trained classifier is incorporated into a tracking system where motorcycle riders are automatically segmented from video data using background subtraction. The heads of the riders are isolated and then classified using the trained classifier. Each motorcycle rider results in a sequence of regions in adjacent time frames called tracks. These tracks are then classified as a whole using the mean of the individual classifier results. Tests show that the classifier is able to accurately classify whether riders are wearing helmets in static photographs. Tests on the tracking system also demonstrate the validity and usefulness of the classification approach.

A few systems have previously been proposed that include the detection of helmets as part of some other system goal; one such approach used helmet detection as an indicator of whether a motorcycle was present in a foreground region of the image data. That technique relied on a vertical histogram projection of the silhouette of the moving object to identify the location of the rider's head. Edges were then detected and accumulated to determine whether a circular object was present in the head region; the presence of a circular object was then used as an indicator of the presence of a helmet.
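The vertical-projection idea described above can be sketched in a few lines of NumPy. This is a hedged illustration, not the cited authors' implementation: the 0.8 peak-height threshold and the synthetic silhouette are assumptions made here for demonstration.

```python
import numpy as np

# Given a binary silhouette mask of the moving object, the column-wise
# (vertical) projection peaks where the object is tallest; the head is taken
# as the topmost pixels of that column band. Thresholds are illustrative.

def head_band_from_silhouette(mask):
    """Return (col_start, col_end, top_row) of the presumed head region."""
    heights = mask.sum(axis=0)                    # vertical projection per column
    peak = heights.max()
    band = np.where(heights >= 0.8 * peak)[0]     # columns near the peak height
    col_start, col_end = band.min(), band.max()
    rows = np.where(mask[:, col_start:col_end + 1].any(axis=1))[0]
    return int(col_start), int(col_end), int(rows.min())

# Synthetic silhouette: a 10x10 mask with a tall "rider" in columns 4-6.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:10, 4:7] = 1       # body + head occupy rows 2..9, columns 4..6
print(head_band_from_silhouette(mask))   # -> (4, 6, 2)
```

A circle (helmet) detector such as an accumulated edge / Hough-circle test would then be applied only inside the returned band, which is the cited paper's indicator step.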

    A Survey on Visual Surveillance of Object Motion and Behaviors Method [4]

    AUTHORS: Weiming Hu, Tieniu Tan

    Visual surveillance in dynamic scenes, especially for humans and vehicles, is currently one of the most active research topics in computer vision. It has a wide spectrum of promising applications, including access control in special areas, human identification at a distance, crowd flux statistics and congestion analysis, detection of anomalous behaviors, and interactive surveillance using multiple cameras, etc. In general, the processing framework of visual surveillance in dynamic scenes includes the following stages: modeling of environments, detection of motion, classification of moving objects, tracking, understanding and description of behaviors, human identification, and fusion of data from multiple cameras. We review recent developments and general strategies of all these stages. Finally, we analyze possible research directions, e.g., occlusion handling, a combination of two and three-dimensional tracking, a combination of motion analysis and biometrics, anomaly detection and behavior prediction, content-based retrieval of surveillance videos, behavior understanding and natural language description, fusion of information from multiple sensors, and remote surveillance.

The aim is to develop intelligent visual surveillance to replace traditional passive video surveillance, which is proving ineffective as the number of cameras exceeds the capability of human operators to monitor them. In short, the goal of visual surveillance is not only to put cameras in the place of human eyes, but also to accomplish the entire surveillance task as automatically as possible.

Text Recognition from Images Method [5]

AUTHORS: Pratik Madhukar Manwatkar

Text recognition in images is a research area which attempts to develop computer systems with the ability to automatically read text from images. These days there is a huge demand for storing the information available in paper documents on a computer storage disk and reusing it later through a search process. One simple way to store information from paper documents in a computer system is to first scan the documents and then store them as images. However, to reuse this information it is very difficult to read the individual contents and to search the contents of these documents line by line and word by word. The challenges involved include the font characteristics of the characters in paper documents and the quality of the images. Due to these challenges, the computer is unable to recognize the characters while reading them. Thus there is a need for character recognition mechanisms to perform Document Image Analysis (DIA), which transforms documents from paper format to electronic format. In this paper we discuss a method for text recognition from images. The objective is the recognition of text from images, for the better understanding of the reader, using a particular sequence of different processing modules.



    Anaconda Navigator is a desktop graphical user interface (GUI) included in Anaconda distribution that allows you to launch applications and easily manage conda packages, environments, and channels without using command-line commands. Navigator can search for packages on Anaconda Cloud or in a local Anaconda Repository. It is available for Windows, macOS, and Linux.


    In order to run, many scientific packages depend on specific versions of other packages. Data scientists often use multiple versions of many packages and use multiple environments to separate these different versions.

    The command-line program conda is both a package manager and an environment manager. This helps data scientists ensure that each version of each package has all the dependencies it requires and works correctly.

Navigator is an easy, point-and-click way to work with packages and environments without needing to type conda commands in a terminal window. You can use it to find the packages you want, install them in an environment, run the packages, and update them, all inside Navigator.


The following applications are available by default in Navigator: JupyterLab, Jupyter Notebook, Spyder, VSCode, Glueviz, Orange 3 App, RStudio.


The simplest way is with Spyder. From the Navigator Home tab, click Spyder, and write and execute your code. You can also use Jupyter Notebooks the same way. Jupyter Notebooks are an increasingly popular system that combines your code, descriptive text, output, images, and interactive interfaces into a single notebook file that is edited, viewed, and used in a web browser.


Windows Store: Official app store for Metro-style apps on Windows NT and Windows Phone. As of Windows 10, it distributes video games, films and music as well.

Windows Phone Store: Former official app store for Windows Phone, now superseded by the Windows Store.

Xbox Live: A cross-platform video game distribution platform by Microsoft. It works on Windows NT, Windows Phone and Xbox, and was initially called Games for Windows Live on Windows 7 and earlier. On Windows 10, the distribution function is taken over by the Windows Store.


Apk-tools (apk): Alpine Package Keeper, the package manager for Alpine Linux.

dpkg: Originally used by Debian and now by Ubuntu. It uses the .deb format and was the first to have a widely known dependency resolution tool, APT. The ncurses-based front-end for APT, aptitude, is also a popular package manager for Debian-based systems.

Entropy: Used by and created for Sabayon Linux. It works with binary packages that are bzip2-compressed tar archives, created using Entropy itself from tbz2 binaries produced by Portage.

Flatpak: A containerized/sandboxed packaging format previously known as xdg-app.


Mac App Store: Official digital distribution platform for OS X apps. Part of OS X 10.7 and available as an update for OS X 10.6.

Homebrew: Package manager for OS X, based on Git.

Fink: A port of dpkg, it is one of the earliest package managers for OS X.

MacPorts: Formerly known as DarwinPorts, based on FreeBSD Ports (as is OS X itself).

Joyent: Provides a repository of 10,000+ binary packages for OS X based on pkgsrc.


Conda is an open source, cross-platform, language-agnostic package manager and environment management system that installs, runs, and updates packages and their dependencies. It was created for Python programs, but it can package and distribute software for any language, including multi-language projects. The conda package and environment manager is included in all versions of Anaconda, Miniconda, and Anaconda Repository.


Anaconda Cloud is a package management service by Anaconda where you can find, access, store and share public and private notebooks, environments, and conda and PyPI packages. Cloud hosts useful Python packages, notebooks and environments for a wide variety of applications. You do not need to log in, or to have a Cloud account, to search for public packages or to download and install them. You can build new packages using the Anaconda Client command line interface (CLI), then manually or automatically upload the packages to Cloud.


This project presents the proposed approach for real-time detection of bike riders without helmets, which works in two phases. In the first phase, we detect a bike rider in the video frame. In the second phase, we locate the head of the bike rider and detect whether the rider is wearing a helmet. In order to reduce false predictions, we consolidate the results from consecutive frames for the final prediction. The block diagram in Fig. 1 shows the various steps of the proposed framework, such as background subtraction, feature extraction, and object classification using sample frames. As the helmet is relevant only for moving bike riders, processing the full frame becomes a computational overhead that does not add any value to the detection rate. In order to proceed further, we apply background subtraction on grayscale frames, with the intention of distinguishing between moving and static objects.
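The per-frame consolidation step mentioned above can be sketched as a simple majority vote. The label names and the number of pooled frames are illustrative assumptions; the paper does not fix these values.

```python
from collections import Counter

# Sketch of the consolidation step: instead of reporting a violation from a
# single frame, per-frame predictions for one tracked rider are pooled and
# the majority label wins.

def consolidate(frame_predictions):
    """Majority vote over per-frame labels ('helmet' / 'no_helmet')."""
    votes = Counter(frame_predictions)
    return votes.most_common(1)[0][0]

# One noisy frame does not trigger a false violation report.
print(consolidate(["helmet", "helmet", "no_helmet", "helmet", "helmet"]))  # -> helmet
```

This is why the approach reduces false predictions: a single misclassified frame is outvoted by its neighbors in the track.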

Environmental conditions like illumination variance over the day, shadows, shaking tree branches and other sudden changes make it difficult to recover and update the background from a continuous stream of frames. In complex and variable situations, a single Gaussian is not sufficient to completely model these variations [10]. For this reason, it is necessary to use a variable number of Gaussian models for each pixel. Here K, the number of Gaussian components per pixel, is kept between 3 and 5, determined empirically.

    Fig 1. Proposed approach for detection of bike-riders without helmet

Next, we present the steps involved in background modeling.


    1. Background Subtraction

Initially, the background subtraction method in [9] is used to separate objects in motion, such as bikes, humans and cars, from static objects such as trees, roads and buildings. However, there are certain challenges when dealing with data from a single fixed camera. Environmental conditions like illumination variance over the day, shadows, shaking tree branches and other sudden changes make it difficult to recover and update the background from a continuous stream of frames. In complex and variable situations, a single Gaussian is not sufficient to completely model these variations. For this reason, it is necessary to use a variable number of Gaussian models for each pixel.
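The background subtraction step can be sketched as follows. Note this is a simplified single-Gaussian-per-pixel model written for illustration; the paper uses a mixture of K = 3-5 Gaussians per pixel (e.g. Zivkovic's adaptive mixture method [9]), and the learning rate, initial variance and threshold k below are assumed values, not taken from the paper.

```python
import numpy as np

# Simplified per-pixel background model: one running Gaussian per pixel.
# A pixel is foreground if it deviates more than k standard deviations
# from its modeled mean; background statistics are updated adaptively.

class RunningGaussianBG:
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 25.0)  # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = frame - self.mean
        fg = (np.abs(diff) > self.k * np.sqrt(self.var)).astype(np.uint8)
        # Update mean/variance only where the pixel still looks like background.
        bg = fg == 0
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] = (1 - self.alpha) * self.var[bg] + self.alpha * diff[bg] ** 2
        return fg

# Static background of value 100; a "rider" patch of 200 appears in frame 2.
bg_frame = np.full((6, 6), 100, dtype=np.uint8)
model = RunningGaussianBG(bg_frame)
moving = bg_frame.copy()
moving[2:4, 2:4] = 200
fg_mask = model.apply(moving)
print(int(fg_mask.sum()))   # -> 4 foreground pixels
```

In practice OpenCV's mixture-of-Gaussians subtractor would be used for the full K-component model; the single-Gaussian version above only illustrates why a mixture is needed when lighting and shadows vary.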

2. Detection of Bike-riders

This module separates the bike riders, or two-wheelers, from the video frame. The video frame is first converted to grayscale, and background subtraction is then applied to extract the riders. This phase involves two steps: feature extraction and classification. Feature Extraction: Object classification requires a suitable representation of visual features.

In the literature, HOG, SIFT and LBP have proven efficient for object detection. Classification: After feature extraction, the next step is to classify the objects as bike riders versus other objects, which requires a binary classifier. Any binary classifier can be used here; we choose SVM due to its robust classification performance even when trained on a small number of feature vectors. We also use different kernels, such as linear, sigmoid (MLP) and radial basis function (RBF), to arrive at the best hyperplane.
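A stripped-down sketch of the HOG idea follows: one gradient-orientation histogram over a whole patch, with no cells or block normalization. This is an illustration only; a real pipeline would use a full HOG implementation (e.g. skimage.feature.hog or cv2.HOGDescriptor) and feed the vectors to the SVM with the kernels named above.

```python
import numpy as np

# Simplified HOG-style descriptor: a magnitude-weighted histogram of
# unsigned gradient orientations over an image patch.

def orientation_histogram(patch, bins=9):
    """Histogram of gradient orientations weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A patch whose intensity increases down the rows has a purely vertical
# gradient (90 degrees), so the descriptor peaks in the 80-100 degree bin.
patch = np.tile(np.arange(8, dtype=np.float64).reshape(-1, 1), (1, 8))
h = orientation_histogram(patch)
print(int(np.argmax(h)))   # -> 4 (the 80-100 degree bin)
```

Descriptors like this, computed per rider candidate, are what the binary SVM separates into bike riders versus other objects.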

    3. Detection of Bike-riders Without Helmet

After extracting the rider, the next step is to check whether the rider is wearing a helmet. The upper half of the rider is extracted and compared against the trained data set to determine whether the rider is wearing a helmet. Usual face detection algorithms are not sufficient for this phase, for the following reasons: low resolution poses a great challenge to capturing facial details such as eyes, nose and mouth, and the bike may be moving at an oblique angle to the camera, in which case the face may not be visible at all. The proposed framework therefore detects the region around the head and then determines whether the bike rider is wearing a helmet. In order to locate the head of the bike rider, the framework uses the fact that the helmet will most probably be located in the upper area of the bike rider.
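The head-localization heuristic above amounts to cropping the top part of the detected rider's bounding box. The 1/4 fraction used in this sketch is an illustrative assumption, not a value stated in the paper.

```python
import numpy as np

# Take the upper part of the detected rider's bounding box as the
# candidate helmet region for classification.

def head_region(rider_patch, fraction=0.25):
    """Return the top `fraction` of the rider bounding box."""
    h = rider_patch.shape[0]
    return rider_patch[: max(1, int(h * fraction)), :]

rider = np.zeros((40, 20), dtype=np.uint8)   # dummy 40x20 rider crop
head = head_region(rider)
print(head.shape)   # -> (10, 20)
```

The cropped region is then passed to the helmet/no-helmet binary classifier instead of running a face detector on the whole frame.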

4. Vehicle Number Extraction

After confirming the absence of a helmet, the algorithm extracts the vehicle number from the lower half of the extracted rider image. The branch of image processing called pattern recognition makes it easy to recognize text in multimedia documents. A pattern can be a fingerprint image, a handwritten word sample, a human face image, a speech signal, a DNA sequence, etc.; that is, all patterns are in machine-editable form. After feature extraction, we train an extreme learning machine. Alongside this feature extraction technique, we use a feed-forward network as a classifier and a convolutional neural network as a feature extractor. Text can be recognized with or without segmentation of characters: segmentation can be at the line, word or character level, and without segmentation characters are recognized from the whole text image.
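The character-level segmentation mentioned above can be sketched with a column-projection split: binarize the plate region and cut wherever the column-wise ink count drops to zero. This is only the segmentation stage; the recognition stage (the CNN/extreme-learning-machine classifier described in this section) is not reproduced here, and the synthetic plate is a made-up example.

```python
import numpy as np

# Split a binarized plate image into per-character column spans wherever
# the vertical projection of ink pixels falls to zero.

def segment_columns(binary_plate):
    """Return (start, end) column spans of connected ink runs."""
    ink = binary_plate.sum(axis=0) > 0
    spans, start = [], None
    for col, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = col
        elif not has_ink and start is not None:
            spans.append((start, col - 1))
            start = None
    if start is not None:
        spans.append((start, len(ink) - 1))
    return spans

# Synthetic plate: two "characters" with a blank gutter between them.
plate = np.zeros((8, 12), dtype=np.uint8)
plate[1:7, 1:4] = 1    # character 1 in columns 1-3
plate[1:7, 7:11] = 1   # character 2 in columns 7-10
print(segment_columns(plate))   # -> [(1, 3), (7, 10)]
```

Each returned span would then be cropped and fed to the character classifier.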

      Fig 2. Flow Diagram after detection of bike rider without helmet

Character recognition is an active field of research, and much work has been done in the area of pattern recognition. Here we use a technique called diagonal-based feature extraction in the last layer of the convolutional neural network, and make feature extraction easier with the help of a genetic algorithm. This is a deep-learning-based neural network technique used for the classification and recognition of text, applied during both the training and testing phases.

Storing the Result in Database and Warning or Fine Generation

This module stores the vehicle number in a separate database for future reference and fine generation. Using the vehicle number, a warning is sent to the owner of the bike; if the offense is repeated, a fine is generated according to the prescribed procedures.
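The storage-and-escalation logic can be sketched with SQLite. The table schema, the plate string and the one-warning-then-fine policy are assumptions made for illustration; the paper does not specify a schema or fine amounts.

```python
import sqlite3

# Record each detected violation and escalate: a warning on the first
# offense for a plate, a fine on any repeat offense.

def record_violation(conn, plate):
    """Insert a violation; return 'warning' on first offense, 'fine' after."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS violations "
        "(plate TEXT, ts DATETIME DEFAULT CURRENT_TIMESTAMP)"
    )
    conn.execute("INSERT INTO violations (plate) VALUES (?)", (plate,))
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM violations WHERE plate = ?", (plate,)
    ).fetchone()
    return "warning" if count == 1 else "fine"

conn = sqlite3.connect(":memory:")
print(record_violation(conn, "TN28AB1234"))   # first offense  -> warning
print(record_violation(conn, "TN28AB1234"))   # repeat offense -> fine
```

A production deployment would use a persistent database file and attach the timestamped frame as evidence alongside each row.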


This project has developed a system for tracking and detecting bike riders who do not wear a helmet while riding, for their own safety. We have presented the system requirements, the experimental setup and a description of the proposed modules, and have covered all the concepts required for the successful implementation of the project.


1. A. Adam, E. Rivlin, I. Shimshoni, and D. Reinitz, "Robust real-time unusual event detection using multiple fixed-location monitors," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 3, pp. 555-560, March 2008.

2. B. Duan, W. Liu, P. Fu, C. Yang, X. Wen, and H. Yuan, "Real-time on-road vehicle and motorcycle detection using a single camera," in Procs. of the IEEE Int. Conf. on Industrial Technology (ICIT), 10-13 Feb 2009, pp. 1-6.

3. C.-C. Chiu, M.-Y. Ku, and H.-T. Chen, "Motorcycle detection and tracking system with occlusion segmentation," in Int. Workshop on Image Analysis for Multimedia Interactive Services, Santorini, June 2007, pp. 32-32.

4. W. Hu, T. Tan, L. Wang, and S. Maybank, "A survey on visual surveillance of object motion and behaviors," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 34, no. 3, pp. 334-352, Aug 2004.

5. J. Chiverton, "Helmet presence classification with motorcycle detection and tracking," IET Intelligent Transport Systems, vol. 6, no. 3, pp. 259-269, September 2012.

6. Z. Chen, T. Ellis, and S. Velastin, "Vehicle detection, tracking and classification in urban traffic," in Procs. of the IEEE Int. Conf. on Intelligent Transportation Systems (ITS), Anchorage, AK, Sept 2012, pp. 951-956.

7. R. Silva, K. Aires, T. Santos, K. Abdala, R. Veras, and A. Soares, "Automatic detection of motorcyclists without helmet," in XXXIX Latin American Computing Conf. (CLEI), Oct 2013, pp. 1-7.

8. R. Rodrigues Veloso e Silva, K. Teixeira Aires, and R. De Melo Souza Veras, "Helmet detection on motorcyclists using image descriptors and classifiers," in Procs. of Graphics, Patterns and Images (SIBGRAPI), Aug 2014, pp. 141-148.

9. Z. Zivkovic, "Improved adaptive Gaussian mixture model for background subtraction," in Proc. of the Int. Conf. on Pattern Recognition (ICPR), vol. 2, Aug. 23-26 2004, pp. 28-31.

10. C. Stauffer and W. Grimson, "Adaptive background mixture models for real-time tracking," in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), vol. 2, 1999, pp. 246-252.

11. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, pp. 62-66, Jan 1979.

12. N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Procs. of the IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR), June 2005, pp. 886-893.

13. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.

14. Z. Guo, D. Zhang, and D. Zhang, "A completed modeling of local binary pattern operator for texture classification," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1657-1663, June 2010.

15. L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," Journal of Machine Learning Research, vol. 9, pp. 2579-2605, 2008.
