Surveillance System for Automobiles

DOI : 10.17577/IJERTV2IS111003


Mr. K. Abilash Kumar

Student, M.Tech (Embedded System Design), Kuppam Engineering College, Kuppam, Andhra Pradesh.

Abstract

Nowadays, innovation tends to focus on everyday conveniences rather than on the safety measures we need to consider. A common example is road accidents, which occur in our day-to-day life. Most accidents take place because of the driver's loss of attention. As a measure to overcome this, this project monitors the visual information of the driver in real time to assess the driver's attentiveness in cars.

The main intention of our project is to design and develop a low-cost device, based on an embedded platform, for detecting driver drowsiness. Specifically, our embedded system includes a webcam placed on the steering column that captures the eye movements of the driver in order to detect fatigue. If the driver is not paying attention to the road ahead and a dangerous situation is detected, the system warns the driver with warning sounds.

  1. Introduction

    Fatigue has been widely accepted as a main factor causing vehicle accidents. According to National Highway Traffic Safety Administration (NHTSA) estimates, 100,000 police-reported crashes are directly caused by driver fatigue each year, resulting in an estimated 1,550 deaths, 71,000 injuries, and $12.5 billion in losses. In 2002, the National Sleep Foundation (NSF) reported that 51% of adult drivers had driven a vehicle while feeling drowsy and 17% had actually fallen asleep.

    The Federal Motor Carrier Safety Administration (FMCSA), the trucking industry, highway safety advocates, and transportation researchers have all identified driver drowsiness as a high-priority commercial vehicle safety issue. Drowsiness affects mental alertness, decreasing an individual's ability to operate a vehicle safely and increasing the risk of human error that could lead to fatalities and injuries. Furthermore, it has been shown to slow reaction time, decrease awareness, and impair judgment.

    Developing technologies for monitoring driver fatigue is essential to prevent vehicle accidents. People in fatigue exhibit certain visual behaviors that are easily observable from changes in facial features such as the eyes, head, and face. Visual behaviors that typically reflect a person's level of fatigue include eyelid movement, head movement, gaze, and facial expression.

  2. Block Diagram

    Fig. 1.2: Block diagram of the Surveillance System for Automobiles (SSA)

    The webcam captures the video and sends it to the laptop. The laptop processes the frames in OpenCV and uses a Haar cascade classifier to detect the eyes; based on the detected eye state, the system sends a command to the microcontroller when fatigue is indicated, and the microcontroller energizes the relay coil so that the buzzer sounds an alarm. The Haar-like features are loaded from an XML file and classify the image in a way similar to Haar wavelets.
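
    As a rough illustration of this pipeline, the sketch below (Python with OpenCV and pySerial; the cascade file, serial port, baud rate, and command byte are assumptions for illustration, not taken from the paper) captures webcam frames, runs a Haar cascade eye detector, and sends a command to the microcontroller when no eye is found in the frame, which is used here as a crude stand-in for closed eyes:

    import cv2
    import serial

    # Haar cascade shipped with OpenCV; the XML file classifies image regions.
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    mcu = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)   # hypothetical port and baud rate

    cap = cv2.VideoCapture(0)                              # webcam on the steering column
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(eyes) == 0:
            # No (open) eye found in this frame: ask the microcontroller to drive the relay/buzzer.
            mcu.write(b"A")                                # 'A' = hypothetical alarm command
    cap.release()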

    Historically, working with only image intensities (i.e., the RGB pixel values at each and every pixel of the image) made the task of feature calculation computationally expensive. A publication discussed working with an alternative feature set based on Haar wavelets instead of the usual image intensities. Viola adapted the idea of using Haar wavelets and developed the so-called Haar-like features. A Haar-like feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region, and calculates the difference between these sums. This difference is then used to categorize subsections of an image. For example, suppose we have an image database of human faces. It is a common observation that, in all faces, the region of the eyes is darker than the region of the cheeks. Therefore, a common Haar-like feature for face detection is a set of two adjacent rectangles that lie above the eye and the cheek regions.

    The position of these rectangles is defined relative to a detection window that acts like a bounding box to the target object (the face in this case).
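
    To make the rectangle-difference idea concrete, the following sketch computes one such two-rectangle feature from an integral image (the image file name and the rectangle coordinates are illustrative assumptions):

    import cv2

    def rect_sum(ii, x, y, w, h):
        # Sum of pixel intensities inside the rectangle (x, y, w, h), using the integral image ii.
        return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

    gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # hypothetical face image
    ii = cv2.integral(gray)                               # integral image of size (rows+1, cols+1)

    # Illustrative placement: upper rectangle over the eyes, lower rectangle over the cheeks.
    eye_sum = rect_sum(ii, 20, 30, 60, 15)
    cheek_sum = rect_sum(ii, 20, 45, 60, 15)
    feature = cheek_sum - eye_sum   # large and positive when the eye region is darker than the cheeks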

  3. Face and Eye Detection

    Face detection and eye detection are both accomplished with the Haar algorithm proposed by Viola and Jones. We find that, due to the complex background, it is not a good choice to locate or detect the right eye in the original image, since searching the whole window takes much more time and gives poor results. So we first find the location of the face and then reduce the range in which we detect the right eye.

    Doing this improves the tracking speed and the detection rate, and reduces the effect of the complex background. Besides, we propose a very simple but powerful method to reduce the computing complexity.

    3.1. Face detection

      Prior to eye localization, a robust face detector is applied to extract face images from the video frames (inside the green rectangle). The original detector runs at less than 10 frames per second on video with 640x480 resolution, which is not an acceptable result for our purpose, so we optimize it to reduce the time it takes to process one frame. Here we propose a very simple but powerful method to reduce the computational complexity of detecting the face.

      Since the system is developed for fatigue monitoring in a vehicle environment, we assume that there is only one person in the video. So we can make an improvement by reducing the search region from the whole image to the neighborhood of the previously detected face. Two parameters greatly influence the computing time of the Haar-based object detection algorithm: one is the region of interest (ROI) of the image (denoted FACE_ROI) within which we detect the face, and the other is the minimum search window (denoted MIN_WND). In the beginning, we set MIN_WND to a very small square and let FACE_ROI cover the whole image. Once we have detected the face, since most of the time the driver's face will not move rapidly and its size will not vary sharply, we set FACE_ROI to a square of side length r*6/5 concentric with the previously detected face, and set MIN_WND to a square of side length r*9/10 (where r is the side length of the previously detected face; the factors are set empirically for the best result). If no face is detected, the detector expands FACE_ROI and reduces MIN_WND until a face is found or the image boundary is reached. Using this simple method, we reduce the face detection time per frame from about 130 ms to about 40 ms.
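
      A minimal sketch of this FACE_ROI/MIN_WND strategy is given below (the factors r*6/5 and r*9/10 come from the text; the cascade file and the simplified fall-back to a full-image search when the face is lost are assumptions):

      import cv2

      face_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      prev_face = None   # (x, y, r): top-left corner and side length of the last detected face

      def detect_face(gray):
          global prev_face
          H, W = gray.shape
          if prev_face is None:
              rx, ry, rw, rh, min_side = 0, 0, W, H, 24       # FACE_ROI = whole image, tiny MIN_WND
          else:
              x, y, r = prev_face
              side = r * 6 // 5                               # FACE_ROI side length r*6/5, concentric
              cx, cy = x + r // 2, y + r // 2
              rx, ry = max(0, cx - side // 2), max(0, cy - side // 2)
              rw, rh = min(side, W - rx), min(side, H - ry)
              min_side = r * 9 // 10                          # MIN_WND side length r*9/10
          roi = gray[ry:ry + rh, rx:rx + rw]
          faces = face_cascade.detectMultiScale(roi, 1.1, 3, minSize=(min_side, min_side))
          if len(faces) == 0:
              prev_face = None      # simplified fallback: search the whole image in the next frame
              return None
          fx, fy, fw, fh = faces[0]
          prev_face = (rx + fx, ry + fy, fw)
          return prev_face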

    3.2. Eye detection

      Let the rectangle of the detected face be [0, 0, w, h]; we set the rectangle of the right-eye ROI as [0, h/6, w/2, h/2].

      After extracting the right-eye ROI, we can detect the right eye (inside the small rectangle) using the Haar algorithm within the interest region. Experimental results show that this method greatly reduces the time spent searching for the eye. Besides, the rate of wrong eye detections is decreased.
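
      A short sketch of this step, assuming a detected face rectangle (x, y, w, h) and the OpenCV eye cascade (the cascade file name is an assumption):

      import cv2

      eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

      def detect_right_eye(gray, face):
          # face = (x, y, w, h); the right-eye ROI is [0, h/6, w/2, h/2] relative to it.
          x, y, w, h = face
          roi = gray[y + h // 6 : y + h // 6 + h // 2, x : x + w // 2]
          eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
          if len(eyes) == 0:
              return None
          ex, ey, ew, eh = eyes[0]
          return (x + ex, y + h // 6 + ey, ew, eh)     # map back to whole-image coordinates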

  4. Fatigue Detection

There are many cues which can be used for detecting fatigue, such as eyelid movement, head movement, gaze, and facial expression, among which eyelid movement reflects the state of fatigue best. Here we propose an automatic threshold method based on the histogram to extract the eye contour and compute the eyelid distance. We find that this method works very well.

4.1. Automatic threshold

After we get the eye-region image, we convert it from a color image to a grayscale image and then use the automatic threshold algorithm to obtain the eye contour containing only the eyeball and eyelid. Here we calculate the histogram of the eye region. Let H be the index with the maximum value in the histogram (the x-coordinate with the biggest y value); we find that H*2/3 is a good threshold to segment the eye contour from the skin around the eye.
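
A minimal sketch of this thresholding step (OpenCV in Python; the only number used is the H*2/3 rule from the text):

import cv2
import numpy as np

def eye_contour_mask(eye_bgr):
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    H = int(np.argmax(hist))          # gray level with the largest count (mostly skin pixels)
    thresh = H * 2 // 3               # the H*2/3 threshold from the text
    # Eyeball and eyelid are darker than the surrounding skin, so keep pixels below the threshold.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    return mask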

4.2. Fatigue detection

Through the above method, we can obtain the eyelid distance. By analyzing the state and the changing trend of the eyelid distance, we can clearly decide whether the driver is clear-headed, drowsy, or even asleep.

The eyelid distance is defined as the distance from the upper eyelid to the lower eyelid. The eyelid distance is relatively large when the driver is clear-minded, and it becomes small when the driver feels fatigued. Figure 3 shows three different eye states: open, half open, and closed.

Fig. 3: a) open eye; b) half-open eye; c) closed eye
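
The paper does not spell out how the eyelid distance is measured from the contour; one plausible reading, sketched below, takes the largest vertical extent of the thresholded eye mask as the upper-to-lower eyelid distance:

import numpy as np

def eyelid_distance(mask):
    # mask: binary eye-contour image from the automatic threshold step above.
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return 0
    best = 0
    for x in np.unique(xs):
        col = ys[xs == x]
        best = max(best, int(col.max() - col.min()))   # eyelid span in this pixel column
    return best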

It is assumed that the driver is clear-minded at the beginning, and the eyelid distance at that time is regarded as the normal value. The eyelid distance is large most of the time, with occasional small values denoting eye blinks. We calculate the moving average of the eyelid distance over 100 continuous frames. When the moving average falls below a threshold (e.g., 60% of the normal value), the driver is judged to be fatigued and a warning is issued.
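
A sketch of this decision rule (the 100-frame window and the 60% ratio come from the text; calibrating the normal value from the first full window is an assumption):

from collections import deque

WINDOW = 100          # number of continuous frames in the moving average
RATIO = 0.6           # warn when the average falls below this fraction of the normal value

recent = deque(maxlen=WINDOW)
normal_distance = None    # set from the first full window, when the driver is assumed alert

def update(distance):
    """Feed one per-frame eyelid distance; return True when a fatigue warning should be issued."""
    global normal_distance
    recent.append(distance)
    if len(recent) < WINDOW:
        return False
    avg = sum(recent) / WINDOW
    if normal_distance is None:
        normal_distance = avg    # driver assumed clear-minded at the beginning
        return False
    return avg < RATIO * normal_distance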

5. Steering Wheel Handling

If the driver is fatigued, he obviously loses the stiffness of his grip on the steering wheel. So, when the driver loses this stiffness on the steering wheel, the alarm is sounded and the driver is alerted.
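
The paper does not describe how this grip stiffness is sensed; purely as an illustration, the sketch below assumes the microcontroller streams a grip-pressure reading over the same serial link and that the laptop echoes back the hypothetical alarm command when the reading drops below a threshold:

import serial

GRIP_THRESHOLD = 200      # illustrative value below which the grip is treated as "lost"

mcu = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)    # hypothetical port and baud rate
while True:
    line = mcu.readline().strip()
    if not line:
        continue
    try:
        grip = int(line)  # microcontroller assumed to send one integer reading per line
    except ValueError:
        continue
    if grip < GRIP_THRESHOLD:
        mcu.write(b"A")   # hypothetical alarm command: energize the relay and ring the buzzer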

  6. Conclusion

    The project Surveillance System for Automobiles has been successfully designed and tested. We integrated the features of all the hardware components used. The presence of every module has been reasoned out and placed carefully, contributing to the best working of the unit; providing citizens with an easily accessible aid for safe driving is our main intention in developing this project. Secondly, using a highly advanced processor and with the help of growing technology, the project has been successfully implemented. The technology should not stay within the laboratory; it should be brought to the masses.



My special thanks to Dr. G. N. Kodandaramaiah, M.E., Ph.D., for helping me in the successful completion of the project.

Author: Mr. K. Abilash Kumar

M.Tech II year (Embedded Systems), Dept. of ECE, Kuppam Engineering College.

Areas of interest: automotive electronics, mechatronics, radar systems, medical electronics.

GUIDE:

Mrs. R. Sabari Banu, M.Tech., (Ph.D.), Assistant Professor, Dept. of ECE, Kuppam Engineering College, Kuppam.

R. Sabari Banu, M.Tech., (Ph.D.), completed her B.E. at Annai Mathammal Sheela Engineering College, Namakkal, securing fifth rank in the university-level examinations (2000-2004). She completed her M.Tech. at Dr. M.G.R. University, Maduravoyal, Chennai (2006-2008), and is pursuing a Ph.D. at Karpagam University in the field of image processing.
