Driver Distraction Apprehension using Face Perception


[1] S. R. Sridhar,

Assistant Professor, CSE, Muthayammal Engineering College

[2] K. Manimekalai,

UG Student, CSE, Muthayammal Engineering College

[3] C. Sindhu,

UG Student, CSE, Muthayammal Engineering College

[4] S. Mounikapriya,

UG Student, CSE, Muthayammal Engineering College

[5] K. Priyanka,

UG Student, CSE, Muthayammal Engineering College

Abstract— A driver distraction detection system is an important application of machine vision and image processing, and many research projects in this field have been reported in recent years. In this paper, unlike conventional detection methods that rely on the eye state alone, we use facial expressions to detect distraction. Distraction detection systems face several challenges, among them changes of intensity due to lighting conditions and the presence of glasses or a beard on the driver's face. We propose and implement a hardware system, based on infrared light, that can be used to resolve these problems. In the proposed method, after a face detection step, the facial components that are most indicative of distraction are extracted and tracked across the frames of a video sequence. The system has been implemented and tested in a real environment.

  1. INTRODUCTION

    Distraction is a process in which the level of consciousness is reduced due to lack of sleep or fatigue, and it may cause the driver to fall asleep quietly. A driver suffering from distraction loses control of the car and may suddenly deviate from the road, hit an obstacle, or overturn the vehicle. According to NHTSA statistics, driver sleepiness is a leading accident factor in the U.S.: of roughly 100,000 such accidents reported by police annually, about 76,000 involve injuries and 1,500 lead to death. This amounts to approximately 1 to 3 percent of police-reported accidents in the U.S. In Australia, close to 6 percent of accidents are caused by driver fatigue and distraction, and in the UK the figure is 16 to 20 percent of police-reported accidents. Research by Iran's Legal Medicine Organization shows that Iran has among the highest road-accident statistics in the world. The financial cost of road accidents in Iran is estimated at about 4,000 million dollars (more than 3.5 percent of GDP), and according to Iran's police department, in 2006 and 2007, 23 percent of accidents were caused by driver fatigue and distraction. Given these statistics, the importance of distraction detection systems is undeniable.

    The main objective of this paper is the design and implementation of a hardware system that can detect driver distraction and raise an alert at the right time. This can prevent many accidents, save lives, and reduce the high cost of damages caused by accidents.

    The rest of this paper is organized as follows: Section 2 provides a detailed survey of different distraction detection methods. In Section 3, an overview of the proposed system is presented. In Section 4, experimental results are presented. Finally, the conclusion is given in Section 5.

  2. DISTRACTION DETECTION METHODS

    Distraction detection techniques can be divided, with respect to the parameter type used for detection, into two categories: intrusive methods and non-intrusive methods. The main difference is that in intrusive methods an instrument is attached to the driver and the recorded values are examined. Although intrusive approaches are highly accurate, they inconvenience the driver, which leads to low acceptance of these methods.

    Distraction detection techniques are generally classified into three groups: methods based on driver state, methods based on driver performance, and hybrid methods.

    Methods based on driver state are divided, with respect to the type of parameter used for state detection, into two categories: techniques based on physiological signals and techniques based on images. The first type uses physiological, non-visual symptoms produced in the body by distraction. Electrodes are attached to the driver's body to record the electrical activity of different parts of the body, including the brain, muscles, and heart, and analysis of the recorded values determines the driver's distraction level. A study by Jap et al., using 30 electrodes on a group of drivers, showed that theta and alpha activity levels increase with increasing sleepiness. Although these methods have good accuracy, they are not recommended for practical applications because they are intrusive and cause discomfort to the driver. Fatigue and sleepiness also create a series of visible signs on a person's face, and these signs form the basis of image-based methods. The first step in image-based methods is usually to detect the person's face in the image. Then the parameters relevant to distraction are extracted and their values are used to determine sleepiness; in some works, the eye state alone is used to determine distraction.

    Methods based on driver performance start by installing sensors in various parts of the vehicle, such as the steering wheel and accelerator. Distraction is then detected by processing the signals received from the sensors to determine the status of the vehicle. Mortazavi et al. proposed examining the steering-wheel angle and lateral displacement of fatigued drivers; their results showed increasing steering-angle and lateral-displacement changes during sleepiness. Although these methods have relatively good accuracy, the technique is not practical due to its high cost.

    In some cases, while the driver takes a short nap, the lateral position of the vehicle does not change, and a distraction detection system based on driver performance alone will not work correctly. To solve this problem, and to increase detection power, some methods combine performance and state parameters. Vural et al. investigated the combination of steering angle and head movement during distraction; their results indicate that the correlation between the head movement and steering angle parameters is higher for a fatigued driver than for an alert one.

  3. OVERVIEW OF THE PROPOSED SYSTEM

    In the proposed system, a sequence of images is acquired by the proposed hardware and fed as input to the system. The system consists of four steps: face detection, facial component extraction, facial component tracking, and distraction detection.

    At the beginning, the background image is generated, and the background subtraction method is used to detect the face region in the image. The detected region either satisfies the conditions we consider characteristic of a face, in which case it is accepted as the face, or the system takes the next image as input and repeats face detection until a reliable face is found.

    Second, the facial components, including the eyebrow, eye, and mouth, are extracted from the face found in the previous step. The horizontal projection technique is used to determine the eyebrow and eye regions, and template matching to determine the mouth region. At the end of this phase, a reference template for each facial component is extracted from the marked areas.

    In the tracking stage, template matching against the reference template is performed for each facial component. It is assumed that the driver's head does not move abruptly. The output of this stage is a new region for each facial component.

    The distraction detection phase observes the status of each facial component separately; if any of the three components exhibits one of the facial expressions caused by fatigue and sleepiness, a warning message is created.

    Note that once face detection and facial component extraction have been performed correctly, the two initial steps need not be repeated for subsequent frames; only the two final stages are performed. The proposed system flowchart is given in Fig. 1.

    [Flowchart: Camera → First Frame → Background Image → Face Detection → Verify Success? (No: next frame) → Face Component Detection → Face Component Tracking → Distraction Detection → (Yes: Create Alert)]

    Figure 1. The proposed system flowchart.

    1. Recommended Hardware

      The hardware of the proposed system is divided into two parts: (a) image recording by a camera sensitive to infrared light, located in front of the driver, and (b) infrared illumination, for which three infrared light sources were designed, each consisting of a number of IR LEDs (infrared light-emitting diodes). Because IR illumination works at night as well as during the day, it removes the time-of-day limitation on using the system. The infrared sources are arranged to illuminate the driver's face over a specified area: one source is placed above the face and the other two at its sides. Fig. 2 shows the hardware of the implemented system.

      Figure 2. The hardware of the implemented system.

    2. Face Detection

      First, an image of the dark face, without infrared illumination, is obtained, stored, and used as the background image. Then infrared illumination is used to create a bright face image. Fig. 3 shows an example of a dark face image and a bright face image.

      The bright face image is subtracted from the dark face image to produce the subtraction image, and face candidates are obtained by thresholding the subtraction image. To determine which areas belong to the face, all candidates are examined, and each region that satisfies the characteristics of a face is selected. Since the face contains eyes, a nose, and a mouth, the area belonging to the face can be expected to include at least one hole; also, the ratio of length to width in this area is usually about 1. Therefore, a candidate region that has at least one hole and a length-to-width ratio between 0.7 and 1.5 is determined to be the face region. If more than one region qualifies, the larger area is chosen. The subtraction image and the thresholded image are shown in Fig. 4.
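      The candidate test above (at least one hole, length-to-width ratio in [0.7, 1.5], largest area wins) can be sketched as follows. This is a minimal illustration, not the authors' code: the candidate dicts and their field names are hypothetical stand-ins for the measurements a connected-component labelling step would produce.

```python
def select_face_region(candidates):
    """Pick the face among thresholded candidate regions.

    Each candidate is assumed to be a dict with 'height', 'width',
    'holes' and 'area' fields (hypothetical names for measurements
    from a connected-component step). A region qualifies if it has
    at least one hole and a length-to-width ratio in [0.7, 1.5];
    among qualifying regions, the largest area is chosen.
    """
    faces = [c for c in candidates
             if c['holes'] >= 1 and 0.7 <= c['height'] / c['width'] <= 1.5]
    if not faces:
        return None  # no reliable face: the caller grabs the next frame
    return max(faces, key=lambda c: c['area'])
```

A region with no hole (e.g. an arm) or an elongated one (e.g. a shoulder strip) is rejected even if it is large, which is exactly why both conditions are checked before area is considered.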

      Figure 3. The dark face image (right) and bright face image (left).

    3. Facial Components Extraction

      At this point, the position of each facial component (the eyebrows, eyes, and mouth) is obtained separately; these components are used to produce templates. Since the eyebrows and eyes of a face normally move simultaneously, detection and tracking are performed only on the eye and eyebrow on the left side of the face, to reduce the computational load and increase system speed. The eyebrow and eye areas are detected by the horizontal projection method on the face region. In this method, the horizontal projection of an image with NRows × NColumns dimensions is computed by adding the intensity values of the pixels in each row:

      hp(x) = Σ_{y=1}^{NColumns} f(x, y),  1 ≤ x ≤ NRows

      Considering the physiological structure and location of eyebrows in a face, we can say that the first valley in the horizontal histogram of the face belongs to the eyebrow and the second valley is related to eye. Fig. 5 shows the horizontal histogram of a face. Using infrared lighting in the proposed system reduces dependency on the intensity and increases the stability of the horizontal projection.
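      The projection-and-valleys idea above can be sketched in a few lines. This is an illustrative reduction, assuming the face region arrives as a row-major list of grayscale rows; the real system works on camera frames, and a practical implementation would also smooth hp(x) before searching for minima.

```python
def horizontal_projection(face_rows):
    """hp[x] = sum over y of f(x, y): total intensity of each row."""
    return [sum(row) for row in face_rows]

def first_two_valleys(hp):
    """Return the first two local minima of the projection, in order.
    Per the text, the first valley is taken as the eyebrow row and
    the second as the eye row (both are darker than surrounding skin)."""
    valleys = [x for x in range(1, len(hp) - 1)
               if hp[x] < hp[x - 1] and hp[x] < hp[x + 1]]
    return valleys[:2]
```

Dark rows (eyebrow, eye) produce low sums, so they appear as dips in the projection profile regardless of the absolute brightness level, which is what makes the method robust under the IR illumination described above.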

      Figure 5. The horizontal histogram of a face.

      The mouth could also be extracted using the horizontal projection: the nose position would be taken as a local maximum of the face's horizontal histogram, and the second local minimum after the nose would belong to the mouth. However, this procedure fails to detect the mouth properly when a beard or mustache is present on the person's face, or when the nose coordinates are wrongly specified due to factors such as sunlight. To overcome such situations, the proposed system extracts the mouth by the template matching technique. The similarity criterion needed for template matching is the correlation coefficient:

      ρ(x, y) = Cov(x, y) / (σ(x) σ(y))

      where x is the reference template, y is the search area, σ denotes the standard deviation, and Cov(x, y) is the covariance between x and y. The value of ρ lies in the interval [-1, 1].

      Figure 4. The subtraction image (right) and thresholded image (left).
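      The correlation measure and the template sweep can be sketched as below. For brevity the search is reduced to one dimension over flattened patches; the real matching is two-dimensional over image regions, and this sketch is not the authors' implementation.

```python
def corr_coef(x, y):
    """Correlation coefficient rho = Cov(x, y) / (sigma(x) * sigma(y));
    x and y are equal-length sequences of pixel intensities."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)  # lies in [-1, 1]

def best_match(strip, template):
    """Slide the template along the strip; return the offset with the
    highest correlation coefficient (assumes non-constant windows)."""
    w = len(template)
    scores = [corr_coef(strip[i:i + w], template)
              for i in range(len(strip) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Because the mean and standard deviation are divided out, the score depends on the shape of the intensity pattern rather than its absolute brightness, which is what lets a fixed nostril template survive illumination changes.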

      Using a fixed mouth template does not produce favorable results, because the appearance of the mouth differs from person to person, with or without a mustache. The appearance of the nostrils, by contrast, is nearly the same across people, so a fixed template of the nostrils is used to detect the nose position. Given the horizontal coordinates of the nose, and considering the relatively constant distance between the nose and mouth, the mouth area is extracted from the face area.

      The designated regions of the facial components are used to obtain the reference templates. The eye and eyebrow reference templates are obtained in the same way: the extracted image of the desired facial component is converted to a binary image by thresholding, and the object detected in the binary image is taken as the reference template.

    4. Facial Components Tracking

      At this stage, each of the three facial components is tracked separately using the template matching technique, and the position of each component is determined in successive frames. Some tracking methods based on template matching use a fixed template for a face; in such methods, if the face under tracking differs from the face the fixed template was obtained from, the result is error-prone. In the proposed method, the template is obtained during tracking of the actual face, so proper results are obtained. Similarly, the large difference between mouth states during opening and closing causes errors in template matching, so updating the mouth reference template at each stage of tracking yields good accuracy.
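      The updating tracker described above can be sketched as follows, reduced to one dimension for clarity. The sum of absolute differences stands in here for the correlation score used by the paper, and the small search radius encodes the "no abrupt head movement" assumption; both the function and its parameters are illustrative, not the authors' code.

```python
def track_component(frame, template, prev_pos, radius=3):
    """Match the template near the previous position and return
    (new_pos, new_template); the returned patch is the refreshed
    reference template (used for the mouth at every stage)."""
    w = len(template)
    lo = max(0, prev_pos - radius)
    hi = min(len(frame) - w, prev_pos + radius)

    def sad(i):  # sum of absolute differences: lower is better
        return sum(abs(a - b) for a, b in zip(frame[i:i + w], template))

    pos = min(range(lo, hi + 1), key=sad)
    return pos, frame[pos:pos + w]
```

Restricting the search to a window around the last known position both speeds up matching and rejects spurious matches elsewhere in the frame.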

    5. Distraction Detection

      Most existing distraction detection methods focus on the eyes and the mouth and examine changes in one or both of them. The presented system additionally examines the eyebrow to increase the detection power. When distraction begins, the three components (eyebrow, eye, and mouth) are usually the most affected. Deliberately holding the eyebrows in a raised position is one of the actions people take to fight off distraction; this happens more in critical situations in which the individual's consciousness is important. The eye is also a key facial component in distraction detection: during distraction, open eyes change to a closed state. Yawning is another activity that people perform periodically when fatigued, so an opening of the mouth can be detected and treated as a distraction symptom.

      In this phase, using the areas obtained from the tracking phase, the exact vertical coordinates of the facial components are extracted, and sleepiness is identified by considering the displacement rate of these components.

      [Image sequence: Frame 1, Frame 11, Frame 21, Frame 31, Frame 41, Frame 51, Frame 61, Frame 71]

      Figure 6. The implementation results of the proposed method for the eyebrow-raising state.

  4. EXPERIMENTAL RESULTS

    The recommended system was embedded in a car and tested in a real driving scenario. Images are obtained by an infrared-sensitive camera under different lighting conditions (night and day) and from different people (different faces, ages, and complexions, with or without glasses and beard). The image sequences are processed at a frame rate of 20 frames per second, with an image size of 360 × 240 pixels. The proposed method was implemented in the MATLAB (Simulink) simulation environment. The hardware is programmed so that it starts infrared lighting after a fraction of a second: the last frame before the infrared sources are activated is taken as the reference background image (the dark face image), and the first frame after activation is taken as the bright face image. After the stages of face detection and facial component detection, tracking is performed. The distraction detection stage examines the three states of eyebrow raising, eye closing, and mouth opening; if any of these states remains stable for a specified period (a number of consecutive frames), an alert message related to the distraction state is created. Fig. 6 shows the implementation results of the proposed method for a sequence of images, shown at intervals of 10 frames.
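    The "stable for a specified period" rule can be sketched as a per-symptom counter that resets whenever the symptom disappears. The threshold of 40 frames (2 s at the stated 20 fps) is an illustrative value, not one given in the paper.

```python
class DistractionMonitor:
    """Raise an alert only when a symptom persists for a given number
    of consecutive frames (sketch of the rule described in the text)."""

    SYMPTOMS = ('eyebrow_raised', 'eye_closed', 'mouth_open')

    def __init__(self, threshold_frames=40):  # 40 frames = 2 s at 20 fps (assumed)
        self.threshold = threshold_frames
        self.counts = dict.fromkeys(self.SYMPTOMS, 0)

    def update(self, states):
        """states: dict mapping symptom -> bool for the current frame.
        Returns the list of symptoms that have persisted long enough."""
        alerts = []
        for s in self.SYMPTOMS:
            self.counts[s] = self.counts[s] + 1 if states.get(s) else 0
            if self.counts[s] >= self.threshold:
                alerts.append(s)
        return alerts
```

Resetting the counter on any frame where the symptom is absent is what filters out blinks and brief mouth movements, so only sustained states trigger the warning.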

    Comparing the results of the proposed method with those of other methods is difficult due to differences in databases and test environments. Accordingly, the characteristics of the methods have been compared instead: in Table 1 we compare the design of the proposed system with several non-intrusive distraction detection systems.

  5. CONCLUSION

In this paper, a hardware-based system for driver distraction detection using facial expressions was presented. The implemented hardware is based on infrared light, which provides benefits such as simplicity of the methods used and independence from environmental lighting conditions. In the proposed method, after the face region is determined using the background subtraction technique, the facial components are obtained by horizontal projection and template matching. In the tracking phase, the elements found in the previous step are followed using template matching, and finally the onset of sleepiness is detected by determining the facial state from the changes of the facial components. Three changes sustained for a certain period, namely raising of the eyebrows, closing of the eyes, and yawning, are considered starting symptoms of driver distraction, and the system warns accordingly. The results indicate that the system also responds appropriately when the driver wears glasses or has a beard and mustache.

REFERENCES

    1. P. R. Knipling and S. S. Wang, "Revised estimates of the US drowsy driver crash problem size based on general estimates system case reviews," The 39th Annual Proceedings of the Association for the Advancement of Automotive Medicine, 1995.

    2. S. R. Sridhar and R. Keerthana, "Diffusion Tensor Imaging with Pattern and Surface-based Morphometry: An Analysis of Brain Activity Changes with Image Compression," American Eurasian Network for Scientific Information Journal for Advances in Natural and Applied Sciences, ISSN: 1995-0772, EISSN: 1998-1090, pp. 122-127, 2016.

    3. M. J. Ghazi Zadeh, M. Ghassemi Noqaby, M. Ahmadi, and H. R. Attaran, "Effect of human factors on fatigue and distraction of heavy-vehicle drivers using the regression model," The First National Conference of Road and Rail Accidents, Iran, 2009. (In Persian)

    4. Q. Wang, J. Yang, P. Ren, and Y. Zheng, "Driver Fatigue Detection: A Survey," Proceedings of the 6th World Congress on Intelligent Control, 2008.

    5. B. T. Jap, S. Lal, P. Fischer, and E. Bekiaris, "Using EEG spectral components to assess algorithms for detecting fatigue," Expert Systems with Applications, Vol. 36, Iss. 2, Part 1, pp. 2352-2359, 2009.

    6. M. Imran Khan, A. B. Mansoor, A. Campilho, and M. Kamel, "Real Time Eyes Tracking and Classification for Driver Fatigue Detection," ICIAR 2008, LNCS 5112, pp. 729-738, 2008.

    7. A. Mortazavi, A. Eskandarian, and R. S. Sayed, "Effect of Drowsiness on Driving Performance Variables of Commercial Vehicle Drivers," International Journal of Automotive Technology, Vol. 10, No. 3, pp. 391-404, 2009.

    8. E. Vural, M. Cetin, A. Ercil, G. Littlewort, M. Bartlett, and J. Movellan, "Machine Learning Systems for Detecting Driver Drowsiness," In-Vehicle Corpus and Signal Processing for Driver Behavior, Springer, pp. 97-110, 2009.

    9. E. Vural, M. Cetin, A. Ercil, G. Littlewort, M. Bartlett, and J. Movellan, "Drowsy Driver Detection Through Facial Movement Analysis," HCI'07 Proceedings of the IEEE International Conference on Human-Computer Interaction, 2007.

    10. E. Vural, M. Cetin, A. Ercil, G. Littlewort, M. Bartlett, and J. Movellan, "Automated Drowsiness Detection for Improved Driving Safety," Proc. 4th International Conference on Automotive Technologies, Istanbul, 2008.

    11. M. J. Flores, J. M. Armingol, and A. D. Escalera, "Real-Time Drowsiness Detection System for an Intelligent Vehicle," IEEE Intelligent Vehicles Symposium, 2008.
