Eye Motion Detection Using Single Webcam for Person with Disabilities

DOI : 10.17577/IJERTV3IS031848


Disha H. Nagpure

Student of Computer Technology, R.T.M.N.U

Nagpur, India-440022

Shubhangi T. Patil

Lecturer of Computer Technology, R.T.M.N.U

Nagpur, India-440009

Snehal P. Bujadkar

Student of Computer Technology, R.T.M.N.U

Nagpur, India-440022

Abstract– People with disabilities such as paralysis or joint pain cannot enjoy the benefits provided by a computer; hence, to facilitate these people, a control technique driven by eye movement is required. The proposed system uses a five-stage algorithm to estimate the direction of eye movement and thereby control a computer system. This approach is efficient in terms of cost and accuracy.

Keywords– face detection; eye detection; pupil detection; eye gaze detection; mouse controlling.

  1. INTRODUCTION

    Eye tracking is the process of measuring the point of gaze (where one is looking). Eye gaze detection expresses the interest of a user, and an eye tracking system is a system that can track the movement of a user's eyes. The potential applications of eye tracking systems range widely, from driver fatigue detection (many accidents are due to driver inattention) to emotion monitoring in learning. Eye tracking is used in communication systems for disabled persons, allowing the user to speak, send e-mail, browse the Internet, and perform other such activities using only their eyes. Users wearing spectacles can also use this system. The "Eye Mouse" system has been developed to provide computer access for people with severe disabilities. Eye movements are often used in HCI (Human-Computer Interaction) studies involving disabled users, who can use only their eyes for input. Eye control has already revolutionized the lives of thousands of people with disabilities. The system tracks the computer user's eye movements with a camera and translates them into movements of the mouse pointer on the screen.

    Eye tracking and eye movement-based interaction using computer vision techniques have the potential to become an important component in future perceptual user interfaces. Motivated by this, we design real-time eye tracking software compatible with a standard PC environment.

  2. BACKGROUND HISTORY

    Previously, a number of approaches have been used to create systems for human-system interaction, including head movement detection and hand gesture detection techniques.

    Head movement detection includes recognizing the head portion in an image and extracting the associated coordinate information from it. A series of such processing steps yields data that can be used for head-computer synchronization. The Viola-Jones object detection algorithm [1] is mainly used for head detection. Object detection is the process of finding instances of real-world objects, such as faces or bicycles, in images. Object detection algorithms use extracted features and learning to recognize instances of an object category.

      Figure 1: Viola-Jones object detection algorithm.
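      As a rough illustration of this detection step, the sketch below runs OpenCV's pretrained Viola-Jones (Haar cascade) face detector on a single frame. The cascade file path and the input file name are assumptions about a typical OpenCV installation, not details taken from this paper.

      import cv2

      # Load OpenCV's bundled Viola-Jones (Haar) frontal face cascade.
      face_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      image = cv2.imread("frame.jpg")  # hypothetical input frame
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

      # detectMultiScale slides the boosted cascade over the image at several
      # scales and returns face bounding boxes as (x, y, w, h) tuples.
      faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
      for (x, y, w, h) in faces:
          cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)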

      Further improvement was made by introducing gesture detection instead of head movement, which was more accurate and more widely applicable. Hand gesture detection was, and still is, a major technique used for system control. The 3D model-based algorithm, the skeleton-based algorithm [2], and the appearance-based algorithm are commonly used for hand gesture detection. In the skeleton-based algorithm [2], a virtual skeleton of the person is computed from joint angle parameters together with segment lengths, and parts of the body are mapped to certain segments.

      Figure 2: Virtual hand skeleton mapped using the skeleton-based algorithm.

      The 3D model approach uses volumetric or skeletal models, or a combination of the two. Volumetric approaches are heavily used in the computer animation industry and for computer vision purposes. These models generate complex 3D surfaces. The drawback of this method is that it is very computationally intensive, and systems for live analysis are still to be developed.

      Figure 3: Hand mapped using the 3D model-based algorithm.

      In recent years, more emphasis has been given to eye detection and retina detection. The major algorithms proposed use either static images or motion-based tracking. Algorithms that concentrate on static images use the first image as a base and subsequent images as references for detecting the direction of motion. The aim is to segment the low-intensity area belonging to the eye by converting the image to grayscale and examining the intensity of the pixels inside the segment. Motion-based algorithms require several frames from which the frame difference is computed. Frame difference is a technique in which two frames are subtracted to determine whether there is any motion and, if so, its direction; key points that characterize the eye are localized and their positions estimated by referencing subsequent frames. Another way of detecting eyes is based on templates learned from a set of training images, captured beforehand, that variably represent the eye's appearance. Its major drawback is the requirement of a huge training set that must be prepared in advance.
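      A minimal sketch of the frame-difference technique just described, assuming OpenCV and the default webcam; the pixel and motion thresholds are illustrative values, not figures from this paper:

      import cv2

      # Frame differencing: consecutive grayscale frames are subtracted and
      # thresholded, so non-zero regions indicate motion.
      cap = cv2.VideoCapture(0)                    # default webcam (assumption)
      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          diff = cv2.absdiff(gray, prev_gray)      # pixel-wise frame difference
          _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
          if cv2.countNonZero(motion) > 500:       # illustrative motion threshold
              print("motion detected")
          prev_gray = gray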

  3. PROPOSED METHODOLOGY

    Figure 4: Five-stage algorithm for the proposed eye tracking system.

      1. Face Detection

        The color of human skin is a distinctive feature for face detection, and several skin-color-based face detection systems have been proposed [6]. Evaluating skin color statistics, it is expected that face color tones will be distributed over a discriminable region of the color space. Many face detection systems are based on the RGB, YCbCr, or HSV color space. The YCbCr color space was defined in response to increasing demands for digital algorithms to handle video information and has since become a widely used model in digital video. It belongs to the family of television transmission color spaces, which also includes YUV and YIQ. YCbCr is a digital color system, while YUV and YIQ are analog spaces for the PAL and NTSC systems, respectively. These color spaces separate RGB (Red-Green-Blue) into luminance and chrominance information and are useful in compression applications; however, the specification of colors is somewhat unintuitive [3]. In our system, we adopted the YUV-based skin color detection method proposed in [6]. Using this technique, we detect the face region.
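        The sketch below illustrates the general idea of skin-color segmentation in a luminance/chrominance space, using OpenCV's YCrCb conversion. The chrominance bounds are commonly cited skin ranges, not the exact thresholds of the method adopted from [6]:

        import cv2
        import numpy as np

        image = cv2.imread("frame.jpg")  # hypothetical input frame
        ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)

        # Commonly cited skin ranges: 133 <= Cr <= 173 and 77 <= Cb <= 127;
        # luminance Y is left unconstrained.
        lower = np.array([0, 133, 77], dtype=np.uint8)
        upper = np.array([255, 173, 127], dtype=np.uint8)
        skin_mask = cv2.inRange(ycrcb, lower, upper)

        # Take the largest skin-colored connected region as the face candidate.
        contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        face = max(contours, key=cv2.contourArea) if contours else None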

      2. Eye Detection

        A human eye is an organ that senses light. An image of an eye captured by a digital camera is shown in Fig. 5.

        Figure 5: Eye structure.
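        The paper does not spell out the implementation of this stage; one common way to realize it is OpenCV's bundled Haar eye cascade applied to the upper half of the detected face region, as in this hedged sketch:

        import cv2

        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        def detect_eyes(face_roi):
            """face_roi: grayscale face region from the previous stage (assumption)."""
            # Search only the upper half of the face, where the eyes lie.
            upper = face_roi[: face_roi.shape[0] // 2, :]
            return eye_cascade.detectMultiScale(upper, scaleFactor=1.1,
                                                minNeighbors=5)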

      3. Pupil Detection

        After locating the left eye portion, we find the contours that may contain the probable iris boundary [7]. First, the contours are determined from the eye boundaries of the binary image. After analyzing the number of elements in each contour, we select the portions whose contours contain more elements than a certain threshold; what remains contains the iris boundary.

        Figure 6: Result of finding contours.

        It is now quite a simple process to find the boundary of the iris, as it is generally observed as a circle. Hashwanter et al. showed that modeling the iris boundary as a circle is robust and the error proportion is very small. We therefore model the boundary of the iris as a circle and extract three parameters: the centre of the iris (xc, yc) and the radius r, with rmin < r < rmax, which define the circle model (x − xc)² + (y − yc)² = r².

        Figure 7: Iris boundary model.

        Figure 8: Flow chart of matching the iris boundary model.
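        A sketch of the contour-based iris localization described above: contours are extracted from the binarized eye image, contours with too few elements are discarded, and the remaining boundary is fitted with the circle model. The values of rmin, rmax, and min_points are illustrative assumptions:

        import cv2

        def find_iris(eye_binary, rmin=5, rmax=30, min_points=20):
            """eye_binary: binarized left-eye image. Returns (xc, yc, r) or None."""
            contours, _ = cv2.findContours(eye_binary, cv2.RETR_LIST,
                                           cv2.CHAIN_APPROX_NONE)
            best = None
            for c in contours:
                if len(c) < min_points:      # keep contours with enough elements
                    continue
                (xc, yc), r = cv2.minEnclosingCircle(c)
                if rmin < r < rmax:          # radius must fit the iris model
                    best = (xc, yc, r)
            return best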

      4. Eye-Gaze Detection

        In eye-gaze detection using geometry feature extraction, several different approaches are combined, each contributing its strengths to zero in on the eye position. A webcam captures the user's face continuously. The captured image goes through a series of transformations, at the end of which we obtain a binary image in which the pupils are clearly highlighted (white), the region around the pupils is black, and the rest of the face is again white [9]. Once pupil detection is done, we obtain the coordinates of the pupils in the image. This gives an efficient approach for real-time eye-gaze detection from images acquired with a web camera.

        First, the image is binarized with a dynamic threshold, and the geometry features of the eye are extracted from the binary image. Next, using an estimation method based on the geometric structure of the eye, we detect the positions of the two eye corners. After that, the centre of the iris is detected by matching an iris boundary model against the image contours [8]. Finally, using the relative position of the iris centre and the eye corners, together with the relationship between image coordinates and screen coordinates, the position on the monitor where the eye is looking is calculated.
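        The mapping from the iris-to-corner geometry to a monitor position can be sketched as below. Simple linear interpolation between the eye corners is an assumption made for illustration; [8] derives the relation from the eye's geometric structure, and the screen dimensions are hypothetical defaults:

        def gaze_point(iris_x, iris_y, left_corner, right_corner,
                       screen_w=1920, screen_h=1080):
            """Map the iris centre's position between the eye corners to a pixel."""
            lx, ly = left_corner
            rx, ry = right_corner
            # Normalized horizontal position of the iris between the corners (0..1).
            t = (iris_x - lx) / float(rx - lx)
            # Vertical offset of the iris from the corner line, roughly normalized.
            v = (iris_y - (ly + ry) / 2.0) / float(rx - lx)
            sx = min(max(t, 0.0), 1.0) * screen_w
            sy = min(max(0.5 + v, 0.0), 1.0) * screen_h
            return int(sx), int(sy)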

        Eye movements are recorded by a device called an eye tracker, which reports raw eye positional data at a specified sampling frequency. The main characteristics of eye tracking equipment are: a) positional accuracy, the difference between the reported and the actual gaze point; b) precision, the minimum amount of gaze shift detectable by the equipment; and c) sampling frequency. A detailed survey of the different eye tracking approaches can be found in the literature. Currently, two main metrics are widely accepted as quality indicators of captured raw eye positional data: calibration error and data loss.

        The best quality is usually achieved when the subject's head is positioned at an optimal distance from the image sensor (usually 40-70 cm). However, modern advances in table-mounted and head-mounted eye tracking systems allow acceptable data quality to be collected even when the head is not fixated. The raw eye positional signal should be classified into fixations and saccades (and other eye movement types, when stimulus properties are likely to invoke them) whenever it is necessary to assess the performance of the Human Visual System (HVS) or to employ eye movement characteristics for biometric purposes. Several algorithms exist for this classification. Classifying the raw gaze data into fixations and saccades also allows the meaningfulness of the captured data to be assessed via behavior scores.
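        One of the standard classification algorithms alluded to above is the velocity-threshold (I-VT) method, sketched here; the 30 deg/s threshold is a common choice in the literature, not a value taken from this paper:

        def classify_ivt(gaze, sampling_hz, threshold_deg_s=30.0):
            """gaze: list of (x_deg, y_deg) samples. Returns per-step labels."""
            labels = []
            dt = 1.0 / sampling_hz
            for i in range(1, len(gaze)):
                dx = gaze[i][0] - gaze[i - 1][0]
                dy = gaze[i][1] - gaze[i - 1][1]
                velocity = (dx * dx + dy * dy) ** 0.5 / dt   # degrees per second
                labels.append("saccade" if velocity > threshold_deg_s
                              else "fixation")
            return labels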

      5. Mouse Controlling

    The last stage is to associate the movement of the cursor with the proportional eye movement. As there is very little scope for the movement of the eye, a proportional change in the position of the cursor is essential. This can be done by setting a default cursor movement with respect to the proportional movement of the eye. It is important to restrict excessive cursor movement, as it may lead to unsatisfactory results when the cursor must be held at a specific position for a period of time. It is therefore very important to measure the relevance and implement an appropriate eye-to-cursor positioning ratio.
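    A minimal sketch of such a proportional eye-to-cursor mapping, assuming the pyautogui library for cursor control; the gain and the per-frame clamp are illustrative values chosen to suppress jitter:

    import pyautogui  # assumption: pyautogui handles OS-level cursor movement

    GAIN = 8.0        # pixels of cursor travel per pixel of pupil travel
    MAX_STEP = 40     # clamp per-frame movement to keep the cursor steady

    def move_cursor(dx_eye, dy_eye):
        """dx_eye, dy_eye: pupil displacement between frames, in pixels."""
        dx = max(-MAX_STEP, min(MAX_STEP, dx_eye * GAIN))
        dy = max(-MAX_STEP, min(MAX_STEP, dy_eye * GAIN))
        pyautogui.moveRel(dx, dy)   # relative cursor move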

  4. EXPERIMENTAL RESULTS

    Calibration error is determined during a calibration procedure, which is very important in eye tracking research. Its aim is to train the eye tracking software to estimate the eye gaze position for every eye image captured by the image sensor. This is achieved by presenting pre-set target points, usually uniformly distributed on the screen, and requesting the subject to look at these predefined gaze locations. Subsequently, when recording takes place, gaze locations that fall outside the initial calibration points are computed by various algorithms. Calibration error indicates the average positional difference between the coordinates of the pre-set calibration points and the coordinates of the estimated gaze locations for those points; it varies with the subject and the experimental setup. Calibration error is expected to be close to the equipment's positional accuracy [8]; however, quite often it can be several times larger than the positional accuracy reported by an eye-tracking vendor. Note that calibration error may also be termed accuracy or positional accuracy in the eye tracking literature.
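    Computed directly from this definition, calibration error is the mean Euclidean distance between each pre-set calibration target and the gaze position estimated for it, as in this small sketch:

    def calibration_error(targets, estimates):
        """targets, estimates: paired lists of (x, y) screen coordinates."""
        total = 0.0
        for (tx, ty), (ex, ey) in zip(targets, estimates):
            total += ((tx - ex) ** 2 + (ty - ey) ** 2) ** 0.5
        return total / len(targets)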

    Data loss is the amount of gaze samples reported as invalid by an eye tracker. It is usually caused by blinking, head movements, changes in the stimulus or surrounding lighting, and squinting. When gaze points fall outside the recording boundaries (e.g., the computer screen), which usually happens due to poor calibration, these gaze points are also marked as invalid. Note that not all eye trackers are capable of marking invalid gaze points; in such cases, invalid gaze points should be found and marked by the experimenter to compute the resulting data loss. The smallest calibration error and data loss are usually achieved when the subject's head is fixated by a chin rest.
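    Data loss can then be computed as the share of samples that the tracker flags invalid or that fall outside the screen, as in this sketch (representing tracker-reported invalid samples as None is an assumption):

    def data_loss(samples, screen_w, screen_h):
        """samples: list of (x, y) gaze points, or None for invalid samples."""
        invalid = 0
        for s in samples:
            if s is None or not (0 <= s[0] < screen_w and 0 <= s[1] < screen_h):
                invalid += 1
        return invalid / len(samples)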

    Figure 9: Main form.

    Figure 10: Captured image.

    Figure 11: Red region detection.

    Figure 12: Game playing using eye motion.

  5. CONCLUSIONS

    In this paper, a low-cost eye motion based eye tracking system is presented. The proposed system uses a five-stage algorithm. Users with severe disabilities can use this system to operate a computer. A real-time eye motion detection technique is presented, along with an eye-gaze detection algorithm using images captured by a single web camera attached to the system. Through mouse control, various games such as chess and cards can be played by associating the movement of the cursor with the proportional eye movement.

  6. FUTURE WORK

In the future, the performance of the proposed method will be improved and extended so that physically challenged people can control their household appliances. The technology used in this project could also be extended to control an electric wheelchair by just looking in the direction in which the user wants it to go.

REFERENCES

  1. Qian Li, Sophia Antipolis, and Niaz, "An Improved Algorithm on Viola-Jones Object Detector," Content-Based Multimedia Indexing (CBMI), Annecy, 2012.

  2. Gang Xu and Yuqing Lei, "A New Image Recognition Algorithm Based on Skeleton Neural Networks," 2008 IEEE International Joint Conference.

  3. Mu-Chun Su, Kuo-Chung Wang, and Gwo-Dong Chen, "An Eye Tracking System and Its Application in Aids for People with Severe Disabilities," Vol. 18, No. 6, December 2006.

  4. Surashree Kulkarni and Sagar Gala, "A Simple Algorithm for Eye Detection and Cursor Control," IJCTA, Nov-Dec 2013, ISSN: 2229-6093.

  5. Qiong Wang and Jingyu Yang, "Eye Detection in Facial Images with Unconstrained Background," Journal of Pattern Recognition Research, 1 (2006) 55-62, China, September 2006.

  6. R. C. K. Hua, L. C. D. Silva, and P. Vadakkepat, "Detection and Tracking of Faces in Real Environments," The 2002 International Conference on Imaging Science, Systems, and Technology (CISST'02), USA, 2002.

  7. R. Stiefelhagen, J. Yang, and A. Waibel, "Tracking Eyes and Monitoring Eye Gaze," Proceedings of the Workshop on Perceptual User Interfaces, 1997.

  8. Nguyen Huu Cuong and Huynh Hoang, "Eye-Gaze Detection with a Single WebCam Based on Geometry Features Extraction," December 2010.

  9. Surashree Kulkarni and Sagar Gala, "A Simple Algorithm for Eye Detection and Cursor Control," IJCTA, Nov-Dec 2013.
