Drowsiness Detection Mobile Vision Face API

DOI: 10.17577/IJERTV11IS010087


G Mahalakshmi

Teaching Fellow

Department of Information Science and Technology, Anna University, CEG Campus, Chennai.

Sharan Kumar S

Student – MCA,

Department of Information Science and Technology, Anna University, CEG Campus, Chennai.

Kishok Kumar M

Student – MCA,

Department of Information Science and Technology, Anna University, CEG Campus, Chennai.

Gokul M

Student – MCA,

Department of Information Science and Technology, Anna University, CEG Campus, Chennai.

Balamurali S

Student – MCA,

Department of Information Science and Technology, Anna University, CEG Campus, Chennai.

Abstract: Lack of forbearance, fatigue and irresponsible behaviour among drivers result in countless fatal crashes and road traffic injuries. Driver drowsiness is a highly problematic issue that impairs judgment and decision making, resulting in fatal motor crashes. This paper describes a simple drowsiness detection approach for a smartphone: an Android application, built with Android Studio 4.4.2, that uses the Mobile Vision API to detect drowsiness before and while driving. A physiological analysis and a quick facial analysis are performed to check for drowsiness before the driver starts driving. Facial analysis is undertaken with the Google Vision API, which determines the head position and blinking duration; blinking duration is used as an indicator of drowsiness. The drowsiness detection accuracy proved to be around 90.8%.

Keywords: Android, Artificial Intelligence & Machine Learning, Autonomous Vehicle Technology, Drowsiness Detection, Transportation Safety, Driver Fatigue, Mobile Vision API.

  1. INTRODUCTION

    Drowsiness is one of the significant causes of road crashes, with considerable damaging consequences: fatal and non-fatal injuries to individuals, property damage, and economic losses to the nation.

    We are familiar with the hazards of drinking and driving or even texting and driving, but many people underestimate the dangers of drowsy driving. Each year, drowsy driving accounts for about 100,000 crashes, 71,000 injuries, and 1,550 fatalities, according to the National Safety Council (NSC). Drowsy driving contributes to an estimated 9.5% of all crashes, according to AAA. However, the actual number may be much higher as it is difficult to determine whether a driver was drowsy at the time of a crash. In many cases, drowsy driving is as dangerous as driving while impaired by alcohol. Common symptoms that have been identified during drowsy driving include: constant nodding, difficulty opening eyes, missing road signs and turns, frequent lane drifting and difficulty in maintaining speed.

    According to the National Sleep Foundation and the experimental values obtained, the factors that contribute to drowsy driving include: less than 7 hours of sleep, sleep disorders, driving at late hours, frequent travel through different time zones (commercial drivers), working late-night shifts or long shift hours, influence of medication, stress, and sedentary lifestyles. It was observed that drunk driving also led to drowsy driving, since alcohol affects the brain cells and causes sleepiness depending on blood alcohol concentration. Test results showed that alcohol enters the bloodstream within about 20 minutes and begins to affect the person roughly 30 minutes after consumption. External factors observed to affect drowsiness level include environmental conditions such as temperature and humidity inside the vehicle.

    The automobile industry is focusing on drowsiness detection systems to offer better quality of service in terms of driver-assistance.

    Different approaches to drowsiness detection have been investigated, mainly involving driver behavioral measures, physiological measures and vehicle-based measures.

    Driver behavioral measures include facial detection and analysis, eye tracking and head movement. PERCLOS was recognized in the past by Walter Wierwille and his colleagues as a reliable and valid measure of driver fatigue through their real-time measures of alertness system. PERCLOS is a widely used index that quantifies drowsiness as the percentage of time a person's eyes are 80% to 100% closed, excluding ordinary blinks.
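As a concrete illustration (our sketch, not from the paper), PERCLOS over a window of per-frame eye-openness estimates can be computed as follows, with a frame counted as closed when the eyes are at least 80% shut:

// Illustrative PERCLOS computation (a sketch; the class name and the
// windowing policy are our assumptions, not from the paper).
public final class Perclos {

    // Eyes at least 80% closed means an openness of at most 0.2.
    private static final double OPENNESS_CLOSED_THRESHOLD = 0.2;

    // eyeOpenness holds one value per frame in [0, 1], e.g. the averaged
    // left/right eye-open probabilities from a face detector. A production
    // implementation would also exclude ordinary short blinks.
    public static double compute(double[] eyeOpenness) {
        if (eyeOpenness.length == 0) return 0.0;
        int closedFrames = 0;
        for (double openness : eyeOpenness) {
            if (openness <= OPENNESS_CLOSED_THRESHOLD) closedFrames++;
        }
        return 100.0 * closedFrames / eyeOpenness.length; // percentage of time
    }
}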

    A highly accurate system aims at identifying dangerous vehicle manoeuvres by a drunk driver and alerting the driver based on sensor readings, or calling the police before any accident actually occurs. In another paper, alcohol intoxication detection is enabled through a system comprising an embedded Raspberry Pi board and Python with OpenCV; the system uses computer vision alongside an alcohol gas sensor. To determine driver fatigue in real time, an electroencephalogram (EEG) based detection system was developed that processes EEG signals using a pulse-coupled neural network, whereby the neural process of driver fatigue was examined. Additionally, a model was proposed that brought a distinctive and innovative approach in which drowsiness was predicted using a lane heading difference metric alongside fatigue measures including driver reaction time and oculomotor movement. The lane heading difference metric recorded the difference, in degrees, between the heading direction of the vehicle and the tangential direction of the lane.

    An approach to vehicle-based measurement was presented using smartphone micro-electro-mechanical system (MEMS) sensors, the accelerometer and gyroscope, to detect sudden abnormal changes in speed, abnormal steering, continuous and careless lane changing, and whether the driver used the smartphone while driving. Vehicle movement was characterized through rotation variations about the x, y and z axes (yaw angles) and through acceleration, which fed a sudden-speeding and slope detection algorithm. A smartphone-based system for detecting drowsiness in automotive drivers was proposed using the percentage of eyelid closure obtained from images from the smartphone's front camera and the ratio of voiced to unvoiced speech data from the microphone. Visual indicators such as head nods, head rotation and eye blinks, extracted from smartphone images by computer vision algorithms while driving, were also used to detect driver fatigue.

    Driver monitoring has been an important field of study and research for many years. Many techniques have been developed, albeit often relying on complicated methods and hardware. There is no established practice of driver monitoring in Mauritius, apart from people who own vehicles equipped with driver aids that guide the driver depending on conditions. The importance of driver recognition, monitoring and infotainment control has been recognized, but Mauritius still lags behind in these facilities, so only a small part of the population can benefit from such systems.

    Previous approaches to determining drowsiness have used computer vision, sensors or complex artificial intelligence strategies for drowsiness detection and classification while driving [16-21]. The aim of this paper is to use a smartphone as a practical means of determining drowsiness before and while driving.

    The original contributions of this paper are (1) to develop a simple, user-friendly and non-invasive Android application for a smartphone to detect drowsiness, (2) to detect drowsiness before the driver enters the vehicle using physiological and facial analysis, and (3) to detect drowsiness while driving using facial analysis.

    The paper is organized as follows. Section 2 describes the research methodology. The experimental tests and the results obtained are discussed in Section 3. Section 4 summarizes the conclusions.

  2. METHODOLOGY

      1. System model and equipment

        This section describes the implementation details and programming techniques that have been used to enable the system to work effectively. In this work, an application is proposed for drowsiness detection using a dedicated smartphone that processes a stream of picture frames of the finger to analyze blood flow, together with facial analysis. Figure 1 illustrates the system model.

        The application continuously monitors driver fatigue while the person is driving by checking the number of blinks and eye drowsiness, and alerts the driver on positive detection of drowsiness, analyzed before and while driving. The Global Positioning System (GPS) provides the latitude and longitude, from which the speed of the vehicle is computed mathematically from the distance travelled. Driver monitoring continues until the destination is reached; results are analyzed every minute, with a timer set to check blinking. The project aims to use no external hardware or gadgets that the driver might feel uncomfortable wearing; only a mobile phone is required.
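A minimal sketch of the speed computation described above, using the haversine distance between two consecutive GPS fixes (the class and method names are ours; on Android, android.location.Location.distanceBetween() provides an equivalent built-in):

// Sketch: vehicle speed from two GPS fixes taken dtSeconds apart.
public final class SpeedEstimator {

    private static final double EARTH_RADIUS_M = 6371000.0;

    // Returns the speed in metres per second.
    public static double speed(double lat1, double lon1,
                               double lat2, double lon2, double dtSeconds) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        // Haversine formula for the great-circle distance between the fixes.
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        double distanceM = 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
        return distanceM / dtSeconds;
    }
}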

        Fig. 1. System model

        An Android application and platform are well suited to the system: they allow extensive customization of the user interface, and the system runs on an independent device with no bulky external hardware support, as is required for EEG or Electro-Oculogram (EOG) approaches, except for a holder to keep the smartphone in place during drowsiness detection. A Redmi Note 8 Pro was used for the experiment. Android Studio with a minimum Software Development Kit (SDK) level of 16 and a target SDK level of 28 was used.

        Android Studio is an Integrated Development Environment (IDE) for the Android operating system, built on JetBrains' IntelliJ IDEA software with a flexible Gradle-based build system. It allows developers to build and test applications on various devices with a feature-rich emulator, and supports programming in Java and, from Android Studio 4.0 onwards, Kotlin. Moreover, it allows C and C++ development through its Native Development Kit (NDK) support, includes built-in support for Google Cloud Platform, and provides updates for better performance, version compatibility and new features. Most importantly, it is capable of app-signing and supports ProGuard. The application was developed using Android Studio 4.4.2.

        In addition, the Google APIs allow the system to access the services anywhere from the mobile device, making it a low-cost system.

      2. Participants

        20 adults between 18 and 60 years of age participated in 153 tests under different conditions: sleep deprivation, alcohol consumption, influence of medication and lack of physical activity (at rest). Tests were also undertaken at different times of the day. The drivers were tested while driving on country roads in good weather and brightness conditions.


        Fig. 2. System architecture

  3. SYSTEM ALGORITHM

    Face detection is the process of automatically locating human faces in visual media (digital images or video). A face that is detected is reported at a position with an associated size and orientation. Once a face is detected, it can be searched for landmarks such as the eyes and nose.

    Face recognition automatically determines if two faces are likely to correspond to the same person. Note that at this time, the Google Face API provides functionality for face detection and not face recognition.

    Face tracking extends face detection to video sequences. Any face appearing in a video for any length of time can be tracked. That is, faces that are detected in consecutive video frames can be identified as being the same person. Note that this is not a form of face recognition; this mechanism just makes inferences based on the position and motion of the face(s) in a video sequence.

    A landmark is a point of interest within a face. The left eye, right eye, and nose base are all examples of landmarks. The Face API provides the ability to find landmarks on a detected face.

    Classification is determining whether a certain facial characteristic is present. For example, a face can be classified with regard to whether its eyes are open or closed. Another example is whether the face is smiling or not.
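The Mobile Vision API exposes these capabilities when the detector is constructed. A minimal configuration sketch, assuming a helper class of our own naming (DetectorFactory), might look like this:

import android.content.Context;

import com.google.android.gms.vision.face.FaceDetector;

// Sketch: building a Mobile Vision FaceDetector with tracking, landmark
// detection and classification enabled, as described in this section.
public final class DetectorFactory {

    public static FaceDetector create(Context context) {
        return new FaceDetector.Builder(context)
                // Track faces across consecutive video frames.
                .setTrackingEnabled(true)
                // Landmark detection is off by default; opt in explicitly.
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                // Enable the eyes-open and smiling classifications.
                .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
                // Accurate mode is required for the Euler Y angle.
                .setMode(FaceDetector.ACCURATE_MODE)
                .build();
    }
}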

  4. APPLICATION IMPLEMENTATION

    The application is organized into three layers, namely:

        • The Presentation layer (MainActivity.java): the main activity is presented to the user when the app is launched, and can then start other activities to perform different actions.

        • The Business layer (facial monitoring).

        • The Service layer (alerting the user when drowsiness is detected).

    On launching the application, the MainActivity.java is executed. It has a relative layout which displays the child views in relative positions. The relative layout eliminates the need for several nested LinearLayout groups.

    Android provides different ViewGroups: ConstraintLayout, which provides a flat view hierarchy; LinearLayout, which arranges all child views in a single direction; RelativeLayout, which displays child views in positions relative to one another; and FrameLayout, which displays a single view.

    Android explicit intents are used to invoke the external classes, namely facialonly.java and drowsiness.java. The startActivity() method launches the corresponding activity.
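A minimal sketch of these launches from inside MainActivity, assuming the two activity classes named above:

// Explicit intents invoking the app's own activities (sketch).
Intent facialIntent = new Intent(this, facialonly.class);
startActivity(facialIntent);   // facial analysis before driving

Intent drivingIntent = new Intent(this, drowsiness.class);
startActivity(drivingIntent);  // drowsiness monitoring while driving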

    The application will require permissions to run on the mobile device. Permissions that need to be accessed include the camera, sensors on the mobile device, location, network state, storage, phone call and vibration.
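On Android 6.0 and later, the dangerous permissions among these (camera and location, for instance) must also be granted at runtime; a minimal sketch of the request, assuming it runs inside an Activity and uses the androidx.core compatibility helpers:

private static final int PERMISSION_REQUEST_CODE = 1;

private void requestPermissionsIfNeeded() {
    String[] needed = {
            Manifest.permission.CAMERA,
            Manifest.permission.ACCESS_FINE_LOCATION
    };
    for (String permission : needed) {
        if (ContextCompat.checkSelfPermission(this, permission)
                != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this, needed, PERMISSION_REQUEST_CODE);
            return; // results arrive in onRequestPermissionsResult()
        }
    }
}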

    Fig. 3. Pose angle estimation. (a) The coordinate system, with the image in the XY plane and the Z axis coming out of the figure. (b) Pose angle examples, where y = Euler Y and r = Euler Z.

    The Euler X, Euler Y and Euler Z angles characterize a face's orientation, as shown in Fig. 3. The Face API provides measurements of Euler Y and Euler Z (but not Euler X) for detected faces.

    The Euler Z angle of the face is always reported. The Euler Y angle is available only when using the accurate mode setting of the face detector (as opposed to the fast mode setting, which takes some shortcuts to make detection faster). The Euler X angle is currently not supported.
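Reading the supported angles from a detected face is straightforward (a sketch; face is a com.google.android.gms.vision.face.Face, and the detector was built with ACCURATE_MODE so that Euler Y is reported):

float eulerY = face.getEulerY(); // left/right head rotation, in degrees
float eulerZ = face.getEulerZ(); // in-plane tilt (roll), in degrees
// Euler X (nodding up/down) is not exposed by the Mobile Vision Face API.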

  5. LANDMARKS

    A landmark is a point of interest within a face. The left eye, right eye, and nose base are all examples of landmarks. The figure below shows some examples of landmarks:

    Fig. 4. Landmarks

    Rather than first detecting landmarks and using the landmarks as a basis of detecting the whole face, the Face API detects the whole face independently of detailed landmark information. For this reason, landmark detection is an optional step that could be done after the face is detected. Landmark detection is not done by default, since it takes additional time to run. You can optionally specify that landmark detection should be done.

    The following table summarizes all of the landmarks that can be detected, for an associated face Euler Y angle:

    Euler Y angle                Detectable landmarks
    < -36 degrees                left eye, left mouth, left ear, nose base, left cheek
    -36 to -12 degrees           left mouth, nose base, bottom mouth, right eye, left eye, left cheek, left ear tip
    -12 to 12 degrees            right eye, left eye, nose base, left cheek, right cheek, left mouth, right mouth, bottom mouth
    12 to 36 degrees             right mouth, nose base, bottom mouth, left eye, right eye, right cheek, right ear tip
    > 36 degrees                 right eye, right mouth, right ear, nose base, right cheek

    Table 1 – Landmarks that can be detected

    Each detected landmark includes its associated position in the image.
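A sketch of reading landmark positions from a detected face:

// Iterate the landmarks reported for a detected face and read each
// landmark's type and position in image coordinates (sketch).
for (Landmark landmark : face.getLandmarks()) {
    if (landmark.getType() == Landmark.LEFT_EYE
            || landmark.getType() == Landmark.RIGHT_EYE) {
        float x = landmark.getPosition().x;
        float y = landmark.getPosition().y;
        // ... e.g. draw the eye position on the graphic overlay
    }
}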

  6. CLASSIFICATION

Classification determines whether a certain facial characteristic is present, and is expressed as a certainty value indicating the confidence that the characteristic is present. For example, a persistently low eye-open probability indicates that the person is likely drowsy.
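A sketch of such a test using the eye-open classifications; the 0.3 threshold is our assumption for illustration, not a value from the paper:

private static final float EYE_OPEN_THRESHOLD = 0.3f;

// Returns true when both eye-open probabilities are low enough to treat
// the eyes as closed for this frame (sketch).
boolean eyesLikelyClosed(Face face) {
    float left = face.getIsLeftEyeOpenProbability();
    float right = face.getIsRightEyeOpenProbability();
    // The API reports Face.UNCOMPUTED_PROBABILITY (-1) when the
    // classification could not be computed for this frame.
    if (left == Face.UNCOMPUTED_PROBABILITY
            || right == Face.UNCOMPUTED_PROBABILITY) {
        return false;
    }
    return left < EYE_OPEN_THRESHOLD && right < EYE_OPEN_THRESHOLD;
}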

Table 2 – Description of public methods of the Google Face API

Method                            Description
getLandmarks()                    Returns the facial landmarks
getContours()                     Used with setLandmarkType(int) to get the face contours
getWidth()                        Returns the face's width in pixels
getHeight()                       Returns the face's height in pixels
getId()                           Returns the face ID used to track the same face across frames
getPosition()                     Returns the position of the face's top-left corner
getIsLeftEyeOpenProbability()     Gives the probability that the left eye is open
getIsRightEyeOpenProbability()    Gives the probability that the right eye is open

The CameraSourcePreview.java class resizes the graphic overlay so that its aspect ratio (height to width) matches the phone's screen and it fits within the displayed area.

The graphic overlay view shows the face position, the specified landmarks and the orientation. Vision dependencies are included in the AndroidManifest.xml file, whereby Google Mobile Services (GMS) allows the face-detection libraries to be downloaded.

Euler Y and Euler Z are angles that identify the face's orientation and allow the face position to be detected. The Face API also provides smiling and eye-open classifications, which give the probabilities that the eyes are open and that the face is smiling.
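One plausible wiring of this preview pipeline (a sketch: DetectorFactory is the helper from the earlier sketch, and FaceTrackerFactory stands for a hypothetical MultiProcessor.Factory<Face> that creates one overlay tracker per detected face):

// Attach a per-face tracker pipeline to the detector and feed it frames
// from the front camera (sketch).
FaceDetector detector = DetectorFactory.create(context);
detector.setProcessor(
        new MultiProcessor.Builder<>(new FaceTrackerFactory()).build());

CameraSource cameraSource = new CameraSource.Builder(context, detector)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .setRequestedFps(30.0f)
        .build();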

7. TESTS AND RESULTS

During application testing, all permissions including camera, location and storage had to be enabled on the smartphone for the application to work properly.

7.1 Facial analysis testing and results

Figure 5 shows the result of determining drowsiness through facial analysis. The first picture shows that the subject was not drowsy. At a different instance, shown in the second result, the subject's eyes were closed for the first 2 seconds; this was classified as drowsiness detected, and the alarm sound starts to ring.

Fig. 5. Facial analysis testing and results
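A sketch of the 2-second rule described above, timing how long the eyes stay closed and starting the alarm once the limit is passed (eyesLikelyClosed() is the earlier sketch; the MediaPlayer holds the alarm sound):

private static final long CLOSED_EYES_ALARM_MS = 2000; // 2 seconds
private long eyesClosedSince = -1;                     // -1 means eyes open

void onFaceUpdate(Face face, MediaPlayer alarm) {
    if (eyesLikelyClosed(face)) {
        long now = SystemClock.elapsedRealtime();
        if (eyesClosedSince < 0) eyesClosedSince = now;  // closure begins
        if (now - eyesClosedSince >= CLOSED_EYES_ALARM_MS && !alarm.isPlaying()) {
            alarm.start();                               // drowsiness: ring
        }
    } else {
        eyesClosedSince = -1;                            // reset on open eyes
    }
}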

It can be deduced that facial analysis improved the accuracy of the drowsiness detection process: a quick analysis of the face with the eye-open and smiling probabilities yields the blinking behaviour, or indicates that the person's head is tilted to a degree that prevents accurate facial detection and analysis. Table 3 shows that the front-facing camera yields better results, with an accuracy of up to 90.8%, than placing the camera at an angle, since it can measure the eye probabilities more accurately.

Table 3 – Results for the front-facing camera compared to the camera at an angle

Scenario              Number of times tested    Accurate results    Percentage success
Front-facing camera   130                       118                 90.8%
Camera at an angle    90                        21                  23.3%

The head orientation pose angle determines the head position of the person. Different angles for EulerY and EulerZ have been tested to get head positions. EulerX is not supported in Face API. Table 4 shows the correct eye blinking detection for different head positions.

Table 4 – Head positions and correct eye blinking detection

Head position                   Correct eye blinking detection
No head tilt                    Yes
Facing straight right           Yes
Facing right                    Yes
Facing slightly right up        Yes
Facing right up                 Yes
Face tilted to right            Yes
Facing left up                  No
Face slightly tilted to left    Yes

It can be seen from Table 4 that eye blinking detection is incorrect only when the head is facing left up. Facial feature inspection improved the system performance to approximately 90.8%.

Fig. 6. Accuracy results for the various analysis

Fig. 7. Accuracy for the whole system.

8. CONCLUSION

The aim of this work was to design and implement a user-friendly driver monitoring and drowsiness detection application. Android Studio 4.4.2 was used for developing the application, and the Mobile Vision Face API was used for face detection; the application is mainly based on facial analysis of the driver. The face orientation was computed through pose angle estimation about the Y and Z axes. Previous projects have used machine learning and AI algorithms for training and data classification, which ultimately provided very accurate results while driving. This work, however, aimed at providing ease of use, availability, reduced cost and privacy, since data was stored on the user's phone before and while driving. The driver could be tested for drowsiness even before driving. It can be concluded that physiological analysis for drowsiness detection yielded an accuracy of around 88.5%, which is comparable to the accuracies obtained from ECG and EEG sensors. Drowsiness detection through facial feature inspection before driving improved the system performance to approximately 90.8%.

REFERENCES

  1. T. Covington, Thezebra.com, 2020. URL https://www.thezebra.com/resources/research/drowsy-driving- statistics/#:~:text=Drowsy%20driving%20statistics%20in%202019&text=23.6%25%20of%20all%20respondents%20said,least%20once%20in%20thei r%20lifetime.&text=Men%20(32.9%25)%20were%20more,compared%20to%20women%20(22.2%25) .

  2. J. M. Owens, T. A. Dingus, F. Guo, Y. Fang, M. Perez, J. McClafferty and B. Tefft. Prevalence of drowsy- driving crashes: Estimates from a large-scale naturalistic driving study. Research Brief, AAA Foundation for Traffic Safety, Washington, DC, February 2018. URL, https://aaafoundation.org/prevalence-drowsy-driving-crashes-estimates-large-scale-naturalistic-driving-study/

  3. Drowsy driving. URL. https://www.nsc.org/road-safety/safety-topics/fatigued-driving

  4. Facts about drowsy driving. URL, https://drowsydriving.org/wp-content/uploads/2009/10/DDPW-Drowsy-Driving-Facts.pdf , 2020.

  5. Meet Android Studio. URL, https://developer.android.com/studio/intro .

  6. Driver drowsiness detection. URL https://www.bosch-mobility-solutions.com/en/solutions/interior/driver-drowsiness-detection/ , 2020.

  7. D. F. Dinges and R. Grace. PERCLOS: A valid psychophysiological measure of alertness as assessed by psychomotor vigilance. US Department of Transportation, Federal Highway Administration, Publication number FHWA-MCRT-98-006, 1998. URL, https://rosap.ntl.bts.gov/view/dot/113/dot_113_DS1.pdf?

  8. J. Dai, J. Teng, X. Bai, Z. Shen and D. Xuan. Mobile phone based drunk driving detection. 4th International Conference on Pervasive Computing Technologies for Healthcare, pages 1-8, March 22, 2010. URL, https://ieeexplore.ieee.org/abstract/document/5482295

  9. D. Sarkar and A. Chowdhury. A real time embedded system application for driver drowsiness and alcoholic intoxication detection. International Journal of Engineering Trends and Technology, 10(9):461-465, 2014. URL, http://www.ijettjournal.org/archive/ijett-v10p288

  10. H. Wang, C. Zhang, T. Shi, F. Wang and S. Ma. Real- time EEG-based detection of fatigue driving danger for accident prediction. International Journal of Neural Systems, 25(02):1550002, 2015. URL, https://pubmed.ncbi.nlm.nih.gov/25541095/

  11. D. M. Morris, J. J. Pilcher and F. S. Switzer III. Lane heading difference: An innovative model for drowsy driving detection using retrospective analysis around curves. Accident Analysis & Prevention. 80:117-24, 2015. URL, https://pubmed.ncbi.nlm.nih.gov/25899059/

  12. F. Li, H. Zhang, H. Che and X. Qiu. Dangerous driving behavior detection using smartphone sensors. 19th International Conference on Intelligent Transportation Systems (ITSC), pages. 1902-1907, 1 November, 2016. URL, https://www.semanticscholar.org/paper/Dangerous-driving-behavior- detection-using-sensors-Li-Zhang/f9d291d05185728b118fac5dc6fcac71d6fdc5fb

  13. A. Dasgupta, D. Rahman and A. Routray. A smartphone- based drowsiness detection and warning system for automotive drivers. IEEE Transactions on Intelligent Transportation Systems, 20(11):4045-4054, 2018. URL, https://www.semanticscholar.org/paper/A-Smartphone-Based-Drowsiness-Detection- and-Warning-Dasgupta-Rahman/1d80267c3685e7cc87297fc3b23d2a40fee82f01

  14. I. Chatterjee and S. Roy, "Smartphone-based drowsiness detection system for drivers in real-time," 2019 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), Goa, India, 2019, pp. 1-6, doi: 10.1109/ANTS47819.2019.9117943. URL, http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S2309-89882021000100004

  15. J. He, S. Roberson, B. Fields, J. Peng, S. Cielocha and J. Coltea. Fatigue detection using smartphones. Journal of Ergonomics, 03:1-7, 2013. URL, https://www.longdom.org/abstract/fatigue-detection-using-smartphones-20707.html

  16. W. Deng and R. Wu. Real-time driver-drowsiness detection system using facial features. IEEE Access, 7:118727-118738, 2019. URL, https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8808931

  17. S. Hachisuka, T. Kimura, K. Ishida, H. Nakatani and N. Ozaki. Drowsiness detection using facial expression features. SAE Technical Paper, 2010.URL, https://saemobilus.sae.org/content/2010-01-0466

  18. Q. Abbas. HybridFatigue: A real-time driver drowsiness detection using hybrid features and transfer learning. International Journal of Advanced Computer Science and Applications, 11(1), 2020. URL, https://thesai.org/Publications/ViewPaper?Volume=11&Issue=1&Code=IJACSA&SerialNo=73

  19. M. Awais, N. Badruddin and M. Drieberg. A hybrid approach to detect driver drowsiness utilizing physiological signals to improve system performance and wearability. Sensors, 17(9):1991, 2017. URL, https://pubmed.ncbi.nlm.nih.gov/28858220/

  20. R. Bhardwaj, P. Natrajan and V. Balasubramanian. Study to determine the effectiveness of deep learning classifiers for ECG based driver fatigue classification. IEEE 13th International Conference on Industrial and Information Systems (ICIIS), pages 98-102, December 2018. URL, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8123273/

  21. G. Yang, Y. Lin and P Bhattacharya. A driver fatigue recognition model based on information fusion and dynamic Bayesian network. Information Sciences, 180(10):1942-1954, 2010. URL, https://www.sciencedirect.com/science/article/abs/pii/S0020025510000253
