Accident Alert System including Traffic Sign Classification

DOI : 10.17577/IJERTV11IS050355


Madhuri Mudigonda [1], Niteesh Kumar Satyapu [2], Dr. Mohan Dholvan [3]

[1,2,3] Department of Electronics and Computer Engineering, Sreenidhi Institute of Science and Technology, Yamnampet, Ghatkesar, Hyderabad, India.

Abstract— To decrease the accident rate, we have built a system that alerts drivers to follow traffic rules. The system has two parts. First, it identifies traffic signs by classifying traffic sign images, even at night and in bad weather conditions, and alerts the driver with a voice note. Second, if the driver feels drowsy, the system alerts the driver with a beep sound. If the driver continues to fall asleep, an alert message and the person's geographical location are sent to their emergency contact. The system thereby alerts the driver and helps to reduce road accidents.

Keywords— Alerting system, CNN, drowsiness detection, sharing geographical location, SVM, traffic sign classification.

  1. INTRODUCTION

    People nowadays put a high priority on driver safety. According to estimates from the World Health Organization, thousands of people lose their lives each year as a result of road crashes, and approximately 1,214 road accidents occur every day in India [9]. The main causes of road accidents are subconscious driving, drunk driving, overspeeding, poor road lighting, running red lights, distracted driving, and violation of traffic rules [13]. As enabling technologies have advanced, several companies and research groups have been investigating and designing driver-fatigue solutions to ensure driver safety. Accordingly, the proposed system aims to reduce traffic fatalities caused by drowsy drivers.

    In this study, facial expressions are the primary cue. The facial analysis covers eye detection, blinking time, PERCLOS (percentage of eye closure), yawn and mouth detection, MOR (mouth opening ratio) during a yawn, and head position detection. Road signs are designed with distinctive colours and shapes that stand out from the surrounding landscape, making them highly detectable by commuters. They are conceived, produced, and placed in accordance with the relevant guidelines. Border, background, and pictogram are the three layout elements of a signpost, and the signage is placed in clearly defined areas beside the road [14]. Due to the complex surroundings of roadways and the scenes around them, detecting and recognising road signs can be problematic. The colour of a sign also fades with time due to prolonged exposure to sunlight and the reaction of the paint with the air [14].

    In this study, a convolutional neural network is used to classify the input image; the recognised traffic sign is displayed as text and announced to the driver with a voice note.

  2. LITERATURE SURVEY

    Numerous attempts to build this technology have already been documented in the literature.

    Hasan Fleyeh and Mark Dougherty [1] built a traffic sign classification system based on invariant features and a support vector machine. The SVM classifier is trained on features extracted from 350 and 250 photographs of sign borders and speed-limit signs, respectively. The best result was 98 percent for sign borders and 93 percent for speed limit signs.

    Zhao Dongfang, Kang Wenjing, Li Tao, and Liu Gongliang [2] proposed a traffic sign classification network using an inception module, SVM, and CNN. Trained on the GTSRB dataset, the model attains a classification accuracy of 98 percent. They also evaluated the network's accuracy on the MNIST dataset and a pneumonia dataset.

    Priya Garg, Debapriyo Roy Chowdhury, and Vidya N [3] used YOLOv2 (You Only Look Once), Faster R-CNN, a pretrained CNN, and TensorFlow. In their analysis, YOLOv2 exceeds Faster R-CNN and SSD by 3.5 percent and 21 percent, respectively, in terms of accuracy. YOLOv2 also trained three times faster than Faster R-CNN, with greater accuracy in road sign recognition.

    Manjiri Bichkar, Suyasha Bobhate, and Prof. Sonal Chaudhari [4] used deep learning to classify and detect traffic signs. A classification model is built on the German Traffic Sign Recognition Benchmark, and images of Indian traffic signs from an Indian dataset are used as the testing dataset.

    Belal Alshaqaqi, Abdullah Salem Baquhaizel, et al. [5] developed a driver drowsiness detection system using image contours and the SAD (Sum of Absolute Differences) algorithm. The system can monitor the driver's condition in real-world day and night situations using an infrared (IR) camera, and the symmetry principle is used for face and eye detection.

    Dini Adni Navastara, Widhera Yoza Mahana Putra, and Chastine Fatichah [6] developed a real-time drowsiness detection method based on PERCLOS (percentage of eye closure), a Support Vector Machine, and Uniform Local Binary Patterns, using facial landmarks obtained with a funnel-structured cascade face detector.

    Burcu Kır Savaş and Yaşar Becerikli [7] worked on drowsiness detection, using an SVM algorithm to detect driver fatigue in real time. The suggested method extracts five features from the video: percentage of eye closure, yawn count, mouth-opening inner area, eye blink count, and head position detection.

  3. METHODS USED

      1. Dlib:

        Dlib is a C++ library that supports real-time applications and contains machine learning methods. A pre-trained facial landmark detector from the dlib package is used to estimate the positions of the 68 (x, y) coordinates that map facial landmarks on the face region. Detecting facial landmarks is crucial for evaluating the shape of the facial region. The dlib library helps to identify and track the facial expressions of drivers in live footage; shape prediction methods are therefore used to locate significant facial structures in the face region.
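        As a hedged illustration, the following Python sketch combines dlib's frontal-face detector with the publicly available 68-point shape predictor; the model filename and image path are assumptions, not taken from the paper's code.

# Minimal sketch: 68-point facial landmark detection with dlib (assumed file paths).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # pre-trained model file

frame = cv2.imread("driver.jpg")                      # illustrative input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for face in detector(gray, 0):                        # rectangles around detected faces
    shape = predictor(gray, face)                     # estimate the 68 landmark positions
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]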

      2. Open CV:

        OpenCV was first introduced by Intel and is widely applied in computer vision. It is an open-source library written in C/C++ that supports the C, C++, Python, and Java programming languages and runs on operating systems such as Windows, Linux, and other platforms. The library can be used to extract relevant information from a photograph or a video. Motion detection, camera calibration, face detection, and recognition are some of the tasks its computer vision and image processing algorithms can be used for.

      3. Geopy:

        Geopy is a Python client for several prominent geocoding web services. It makes it simple for users to find the coordinates of cities, addresses, countries, and landmarks anywhere in the world using third-party geocoders and other data sources.
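        A small sketch of geocoding with geopy is given below; the paper does not name the geocoder it uses, so Nominatim, the user agent, and the query string are assumptions.

# Sketch: resolving a place name to coordinates with geopy (Nominatim assumed).
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="accident_alert_demo")   # illustrative user agent
location = geolocator.geocode("Ghatkesar, Hyderabad, India")
if location is not None:
    print(location.latitude, location.longitude)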

      4. Twilio:

        Twilio is a customer engagement platform used by many companies to communicate with people. Twilio communication may be in the form of voice, text, chat, video, and email through APIs, making it easy for each organisation to stay in touch with customers on the channels they prefer.
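        The following sketch shows how an SMS alert could be sent through the Twilio REST client; the credentials, phone numbers, and message text are placeholders, not values from the paper.

# Sketch: sending an SMS through Twilio (placeholder credentials and numbers).
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")        # values come from the Twilio console
message = client.messages.create(
    body="Driver drowsiness detected. Last known location: <lat>, <lon>",
    from_="+10000000000",                           # Twilio number (placeholder)
    to="+910000000000",                             # emergency contact (placeholder)
)
print(message.sid)                                  # message identifier returned by the API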

      5. Tkinter:

        Tkinter is the Python interface to Tk, the GUI toolkit. Tk is an open-source, cross-platform widget toolkit that can be used to create graphical user interfaces in a variety of programming languages [17].
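        A minimal Tkinter window is sketched below to show the kind of GUI front end the project uses; the window title and widget labels are illustrative only.

# Sketch: a minimal Tkinter window (labels are illustrative).
import tkinter as tk

root = tk.Tk()
root.title("Accident Alert System")
tk.Label(root, text="Traffic Sign Classification and Drowsiness Detection").pack(padx=20, pady=20)
tk.Button(root, text="Start", command=root.destroy).pack(pady=10)
root.mainloop()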

      6. CNN:

        CNN is a supervised type of deep learning most commonly applied to analysing visual images. CNNs are effective for image classification, object detection, and image recognition, using a special technique called convolution. A CNN is a powerful tool but requires a large labelled dataset for training [18].

      7. gTTS:

    gTTS (Google Text-to-Speech) is a Python module that interfaces with Google Translate's text-to-speech API. It writes spoken mp3 data to a file (or a file-like object) for later playback. gTTS includes a speech-specific sentence tokenizer that enables text of any length to be read [19].
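    The sketch below converts a recognised sign label into an mp3 file with gTTS; the label text and output filename are illustrative.

# Sketch: converting a classifier label to speech with gTTS.
from gtts import gTTS

label = "Speed limit 60 kilometres per hour"   # example classifier output
tts = gTTS(text=label, lang="en")
tts.save("alert.mp3")                          # the saved mp3 can then be played to the driver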

  4. PROPOSED SYSTEM

    The contribution of this project is twofold. First, it identifies traffic signs by classifying traffic sign images, even at night and in bad weather conditions, and alerts the driver with a voice note. Second, if the driver feels drowsy, the system alerts the driver with a beep sound. If the driver continues to fall asleep, an alert message and the person's geographical location are sent to their emergency contact.

    Fig 1. Block Diagram

      1. Traffic Sign Classification

        The training folder contains 43 sub-folders, one per class, labelled 0 to 42, and every image carries the label of its class. The Python Imaging Library is used to open each image into an array. More than 12,000 images from the German dataset are used to train the CNN model, and 400 traffic sign images from an Indian dataset are used for testing, achieving 98% accuracy. The code is written in Python using the TensorFlow library.
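        A hedged sketch of this pipeline is shown below: images are loaded with the Python Imaging Library into arrays and a small Keras CNN is trained over the 43 classes. The directory layout, the 30×30 input size, and the layer choices are assumptions for illustration, not the exact architecture reported in the paper.

# Sketch: load labelled sign images with PIL and train a small Keras CNN (assumed layout).
import os
import numpy as np
from PIL import Image
from tensorflow.keras import layers, models

data, labels = [], []
for class_id in range(43):                       # folders named 0..42, one per class
    folder = os.path.join("Train", str(class_id))
    for fname in os.listdir(folder):
        img = Image.open(os.path.join(folder, fname)).resize((30, 30))
        data.append(np.array(img))
        labels.append(class_id)
X = np.array(data) / 255.0                       # normalise pixel values
y = np.array(labels)

model = models.Sequential([
    layers.Conv2D(32, (5, 5), activation="relu", input_shape=(30, 30, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(43, activation="softmax"),      # one output per traffic-sign class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=15, validation_split=0.2)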

      2. Alerting System

        The gTTS module is used to convert any text to speech. The input text can be in any language, and it is converted to speech in that language. As a result, gTTS acts as a modifiable, speech-specific sentence tokenizer that can read text of any length.

      3. Face Detection

        The initial step is to read the video frame by frame as input. Each frame is resized to 480×270 pixels using area interpolation. With the help of the dlib library and facial landmarks, the person's face is then detected.
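        A minimal sketch of this step is shown below, assuming the default camera as the video source.

# Sketch: read frames, resize to 480x270 with area interpolation, detect faces with dlib.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
cap = cv2.VideoCapture(0)                       # 0 = default camera (assumed source)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.resize(frame, (480, 270), interpolation=cv2.INTER_AREA)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)                   # rectangles around detected faces
cap.release()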

      4. Facial Landmarks:

        The human face is described by 68 landmark coordinates; with the help of those landmarks, human faces can be detected.

        Fig 2. Facial Landmarks (68 landmarks)

        There are 68 facial landmarks, indexed from 0 to 67. The left-eye landmarks are points 36 to 41, the right-eye landmarks are points 42 to 47, and the mouth landmarks are points 48 to 67. These points are used to identify the features of the eyes and mouth, which help detect eye blinking and yawning (see the sketch after Fig. 3).

        Fig 3. Eye landmarks
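        A small sketch of slicing the 68 landmark points into the eye and mouth regions described above; the helper function name is illustrative.

# Sketch: convert dlib landmarks to a NumPy array and slice out the eyes and mouth.
import numpy as np

def shape_to_points(shape):
    # 'shape' is the dlib full_object_detection returned by the shape predictor
    return np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])

LEFT_EYE = slice(36, 42)    # points 36-41
RIGHT_EYE = slice(42, 48)   # points 42-47
MOUTH = slice(48, 68)       # points 48-67

# points = shape_to_points(shape)
# left_eye, right_eye, mouth = points[LEFT_EYE], points[RIGHT_EYE], points[MOUTH]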

      5. Drowsiness Detection:

        If a person's eyes keep closing, the person is considered drowsy or asleep. Using the EAR, every closed-eye and open-eye state is counted over an interval; if the eyes remain closed for 75% or more of that interval, the system delivers a warning. The extracted eye landmarks and the distances between them are measured using the EAR (Eye Aspect Ratio), and yawning is measured using the MAR (Mouth Aspect Ratio). When the eyes are closed, the EAR approaches 0.

        When the eyes are open, the EAR has a value greater than zero, so the closed-eye and open-eye states can be distinguished in the footage. If the EAR is less than or equal to 0.25, the person has blinked or closed their eyes. The mouth landmarks range from points 48 to 67, as shown in Fig. 2; based on the points shown in Fig. 4, we calculate the Mouth Aspect Ratio. If the MAR is greater than or equal to 0.3, the person is yawning. Therefore, if both the EAR and MAR conditions are satisfied, the person is considered drowsy, and the system alerts the driver with a beep sound.

        Fig 4. EAR, MAR ratios
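        As a hedged sketch, the standard EAR formula and an inner-lip MAR are computed below on the landmark arrays sliced earlier, using the 0.25 and 0.30 thresholds quoted above; the exact formulation used in the paper may differ.

# Sketch: EAR and MAR on the eye/mouth landmark arrays (thresholds from the text).
import numpy as np

def eye_aspect_ratio(eye):
    # eye: (6, 2) array of one eye's landmarks in dlib order
    a = np.linalg.norm(eye[1] - eye[5])       # vertical distance
    b = np.linalg.norm(eye[2] - eye[4])       # vertical distance
    c = np.linalg.norm(eye[0] - eye[3])       # horizontal distance
    return (a + b) / (2.0 * c)

def mouth_aspect_ratio(mouth):
    # mouth: (20, 2) array of points 48-67; the indices below pick the inner lip
    a = np.linalg.norm(mouth[13] - mouth[19])
    b = np.linalg.norm(mouth[14] - mouth[18])
    c = np.linalg.norm(mouth[15] - mouth[17])
    d = np.linalg.norm(mouth[12] - mouth[16])
    return (a + b + c) / (3.0 * d)

EAR_THRESHOLD = 0.25   # eyes treated as closed/blinking when EAR <= 0.25
MAR_THRESHOLD = 0.30   # yawning when MAR >= 0.30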

      6. Geographical Location

    Geopy is a Python package that allows users to use third-party geocoders to get the coordinates of cities, addresses, countries, and landmarks all around the world. In this project, if the driver falls asleep continuously, an alert message and the person's geographical location are sent to their emergency contact.

  5. DESIGN FLOW

    The flow charts of the project design are shown below in Fig. 5 and Fig. 6. Fig. 5 shows the design flow of traffic sign classification, and Fig. 6 shows the design flow of drowsiness detection, including alerting the driver and obtaining the geographical coordinates of the driver's location.

    Fig 5. Traffic Sign Classification

    Fig 6. Drowsiness Detection

  6. RESULTS

    After experimentation, we obtained the following results. A traffic sign image is taken as input, classified using the CNN model, and the output is displayed with an accuracy of 98 percent. The system also detects the drowsiness of the person and alerts the driver with a beep sound. If the driver remains drowsy for a longer time, the system sends an alert message and the geographical location to their emergency contact.

    The project is packaged as an executable file that can be used on any computer without preinstalled software; only the dlib facial-landmark model and the traffic sign classification model are needed to execute it.

    Fig 7. Accuracy of CNN model

    Fig 8. Result GUI window

    Fig 9. Message from Twilio

    Fig 10. Location Shared to Emergency Contact.

  7. CONCLUSION

This system identifies traffic signs even at night and in bad weather conditions, which can help people recognise traffic signs from a long distance. It also helps when the driver feels drowsy by alerting the driver with a beep sound. If the driver falls asleep continuously, an alert message and geographical location are sent to their emergency contact.

Therefore, this system alerts the driver and helps to reduce road accidents.

REFERENCES

[1] Hasan Fleyeh and Mark Dougherty, "Traffic sign classification using Invariant Features and Support Vector Machines," IEEE Intelligent Vehicles Symposium, Eindhoven University of Technology, Eindhoven, The Netherlands, June 4-6, 2008.

[2] Zhao Dongfang, Kang Wenjing, Li Tao, and Liu Gongliang, "Traffic sign classification network using inception module," School of Information Science and Engineering, Harbin Institute of Technology, Weihai, China, Instruments.

[3] Priya Garg, Debapriyo Roy Chowdhury, and Vidya N, "Traffic Sign Recognition and Classification Using YOLOv2, Faster RCNN and SSD," College of Engineering, Pune, SPPU University, India, 10th ICCCNT, July 6-8, 2019, IIT Kanpur, Kanpur, India, IEEE.

[4] Manjiri Bichkar, Suyasha Bobhate, and Prof. Sonal Chaudhari, "Traffic Sign Classification and Detection of Indian Traffic Signs using Deep Learning," Department of Computer Engineering, Datta Meghe College of Engineering, Airoli, Navi Mumbai, Maharashtra, India, Volume 7, Issue 3, May-June 2021, IJSRCSEIT.

[5] Belal Alshaqaqi, Abdullah Salem Baquhaizel, Mohamed El Amine Ouis, Meriem Boumehed, Abdelaziz Ouamri, and Mokhtar Keche, "Driver Drowsiness Detection System," Laboratory Signals and Images (LSI), University of Sciences and Technology of Oran Mohamed Boudiaf (USTO-MB), Oran, Algeria, 2013 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA).

[6] Dini Adni Navastara, Widhera Yoza Mahana Putra, and Chastine Fatichah, Informatics Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia, JICETS 2019, Journal of Physics: Conference Series, IOP Publishing.

[7] Burcu Kır Savaş and Yaşar Becerikli, Computer Engineering Department, Kocaeli University, Kocaeli, Turkey, 2018 6th International Conference on Control Engineering & Information Technology (CEIT), 25-27 October 2018, Istanbul, Turkey.

[8] Traffic sign, https://www.godigit.com/content/dam/godigit/directportal/en/contenthm/mandatory-traffic-signs.jpg

[9] Graphical analysis of road accidents, https://sites.ndtv.com/roadsafety/importantfeature-to-you-in-your-car-5/

[10] Survey of road accidents in India, https://www.livemint.com/news/india/In2020-1-20-lakh-people-died-in-road-accidents-caused-by-negligence-11632042965902.html

[11] Traffic sign recognition, https://www.researchgate.net/profile/Hoang-Van-Dung/publication/323161448/figure/fig2/AS:669046253035545@1536524465312/Overview-of-traffic-sign-recognitionarchitecture.png

[12] Dataset, https://www.kaggle.com/valentynsichkar/traffic-signs-classification-withcnn

[13] Causes of road accidents, https://www.bajajfinservmarkets.in/insurance/motor-insurance/traffic-rules-signs-and-violations/common-causes-of-roadaccidents-in-india.html

[14] S. Vitabile, A. Gentile, and F. Sorbello, "A neural network-based automatic road sign recognizer," presented at The 2002 Inter. Joint Conf. on Neural Networks, Honolulu, HI, USA, 2002.

[15] Geopy, https://pypi.org/project/geopy/

[16] Twilio, https://www.twilio.com/thecurrent/what-is-twilio-how-does-it-work

[17] Tkinter, https://www.pythontutorial.net/tkinter/#:~:text=Tkinter%20is%20the%20Python%20interface,languages%20to%20build%20GUI%20programs.

[18] CNN, https://www.analyticsvidhya.com/blog/2021/05/convolutional-neural-networks-cnn/#:~:text=In%20deep%20learning%2C%20a%20convolutional,a%20special%20technique%20called%20Convolution.

[19] gTTS, https://pypi.org/project/gTTS/
