Gaze Detection Monitoring System

DOI: 10.17577/IJERTV8IS060439


Aakanksha Mathur

M.Tech MPC (CSE),

IGDTUW, Delhi

Mr. Indra Thanaya

Assistant Professor, CSE Dept., IGDTUW, Delhi

Abstract: Eye/gaze detection has been the subject of a great number of studies, with applications in very different fields, including medical research based on oculography, characterization of car drivers' behavior, and computer interfaces for handicapped people. Systems based on different technologies have been implemented, in many cases with devices mounted on the person's head; non-intrusive systems have also been developed and are used in many applications. My aim is to design such a system so that it not only benefits the needy but is also cost-effective enough to be affordable at all levels of society. Herein, a custom gaze-point detection HW/SW system intended to allow the mobility-hindered population to control selection in a computer interface via gaze is presented.

Keywords: Gaze detection; eye tracking; pupil detection; human-computer interaction; Arduino; MATLAB

  1. INTRODUCTION

    Eye/gaze detection has been the subject of a great number of studies, with applications in very different fields, including medical research based on oculography, characterization of car drivers' behavior, and computer interfaces for handicapped people. Eye gaze tracking is a technique that involves tracking a person's eye gaze using either contact-based, invasive hardware or expensive, non-standard hardware. Estimating the gaze point at a given time can provide a host of information about the intention and perception of a scene. This information can be applied toward the enhancement of Human-Computer Interaction (HCI), making such interfaces smoother and more efficient. For the population with limited or no mobility, a gaze-point estimation system that accurately selects components of a computer application is extremely beneficial.

    To be usable as an input device in a computer interface, a robust eye gaze tracking system should fulfil the following requirements: be highly precise and accurate; be non-intrusive; be immune to occlusions such as blinks; be immune to changes in head displacement; be tolerant of changes in lighting; and be able to track continuously and in real time.

    Herein, a custom gaze-point detection HW/SW system intended to allow the mobility-hindered population to control selection in a computer interface via gaze is presented.

  2. LITERATURE SURVEY

    A circular Hough transform was used to detect the iris location in the method by Amer Al-Rahayfeh [1]. That paper proposes rotating face images to zero degrees (the no-rotation position), which the Viola-Jones algorithm alone failed to achieve. Video-oculography systems obtain information from one or more cameras to estimate the gaze direction [2].

    In [3], the calibration process combines two approaches: Viola-Jones is used to detect the eye features, while a neural network is applied for training on and classifying the raw eye data. Electro-oculography [4] is useful for paralysed people and offers add-on features such as tilt detection, obstacle sensing, avoidance, and path re-routing; the design is built around an embedded control module that processes the bio-electric signals generated by eye movements to actuate real-world events. The paper referred to in [5] provides a detailed review of gesture recognition, emphasising head gestures and facial expressions; the system offers the ability to identify and interpret gestures to control devices. Pupil extraction can be performed using a scanning method that detects any neighbouring black pixel, thereby scanning the pupil [6]. The problem of automatic target classification in through-the-wall radar imaging (TWRI) is dealt with in [7]; the authors propose a scheme that considers stationary objects enclosed within a structure and works on the SAR image instead of the raw data, consisting of segmentation, feature extraction based on super-quadrics, and classification. Content-Based Image Retrieval (CBIR) [8] techniques, in which feature extraction is the most influential step, are gaining significance and are used for image recognition in large image databases.

  3. METHODOLOGY

    The gaze detection monitoring system that I am developing here is meant to benefit the disabled who cannot move any of their body parts except their eyes. For such users, eye movement and blinking are the only way to communicate with the outside world. This research aims at developing a system that can aid the physically challenged by allowing them to interact with a computer system using only their eyes. The hardware part consists of a prototype of the wheelchair assembly (using a servo motor), which moves in the desired direction according to the instructions it receives from MATLAB-based software; the software captures the user's eye movements and moves the motor accordingly. A webcam or the built-in camera of the laptop plays a major role in capturing the image of the user's face. The image is then processed on the PC or laptop, which communicates over a serial interface with the communication port of the micro-controller. In our case, we are using an ATMega128 Arduino Uno micro-controller in order to reduce the overall cost of the system, thus making it more affordable for all levels of society.

    The captured image is first analysed by the system to determine the person's point of gaze; the micro-controller then sends serial instructions to the motor to move in that specific direction. On the software side, I aim to design a software flow in the MATLAB IDE in which the instructions to the motor are issued: the image of the eyes is captured continuously, and the motor moves according to the direction in which the eye moves. The algorithm used in the system is shown in Figure 1, and a minimal sketch of the resulting loop follows the figure.

    Figure 1. Algorithm
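
    The loop below is a minimal MATLAB sketch of this flow, not the code used in the paper: it assumes the MATLAB support packages for USB webcams and for Arduino hardware, a hypothetical serial port ('COM3') and servo pin ('D9'), and a classifyGaze helper that is sketched in the next section.

```matlab
% Minimal sketch of the capture-classify-actuate loop (assumptions:
% webcam and Arduino support packages installed; the port and pin
% names are placeholders; classifyGaze is sketched later).
cam = webcam;                          % built-in or USB camera
a   = arduino('COM3', 'Uno');          % serial port name is an assumption
s   = servo(a, 'D9');                  % servo signal pin is an assumption

while true
    img = snapshot(cam);               % capture the user's face
    direction = classifyGaze(img);     % 'left' | 'right' | 'straight' | ''
    switch direction
        case 'left',     writePosition(s, 0.0);   % rotate the servo left
        case 'right',    writePosition(s, 1.0);   % rotate the servo right
        case 'straight', writePosition(s, 0.5);   % return to centre
    end
    pause(0.1);                        % simple frame pacing
end
```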

    We have worked on Haar feature extraction, which is a stage of the Viola-Jones algorithm. The Hough transform is a feature-extraction technique; in our flow, Viola-Jones first detects the face, and the detected region is then processed further with the Hough transform to locate the pupil.
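
    The sketch below illustrates this stage as the classifyGaze helper used in the loop above. It assumes MATLAB's Computer Vision Toolbox and Image Processing Toolbox; the 'EyePairBig' cascade model, the pupil radius range, and the 0.4/0.6 split are illustrative choices, not values from the paper.

```matlab
function direction = classifyGaze(img)
% Sketch: Viola-Jones (Haar cascade) finds the eye region, then a
% circular Hough transform (imfindcircles) locates the dark pupil,
% whose horizontal offset within the eye decides the gaze direction.
direction = '';                                  % default: no decision
gray = rgb2gray(img);

detector = vision.CascadeObjectDetector('EyePairBig');
eyeBox = step(detector, gray);                   % [x y w h] rows
if isempty(eyeBox), return; end

strip = imcrop(gray, eyeBox(1,:));               % strip containing both eyes
eye = strip(:, 1:floor(end/2));                  % keep one eye (left half)

centers = imfindcircles(eye, [6 20], ...         % pupil radius range, pixels
    'ObjectPolarity', 'dark', 'Sensitivity', 0.92);
if isempty(centers), return; end                 % strongest circle is first

rel = centers(1,1) / size(eye, 2);               % 0 = far left, 1 = far right
if rel < 0.4
    direction = 'left';
elseif rel > 0.6
    direction = 'right';
else
    direction = 'straight';
end
end
```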

    1. Simulation Tools

      For the development of this system, we need two software packages: the MATLAB IDE and the Arduino IDE. MATLAB is used for working on the image-processing concepts, and the major software flow is implemented in this IDE.

      The Arduino IDE is used for controlling the sensors, as we are using the ATMega128 micro-controller. MATLAB provides a built-in support package for Arduino hardware that makes the two compatible.
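
      As a small illustration of that compatibility (with an assumed port and pin), the support package lets MATLAB command the servo directly, so no separate sketch has to be written and uploaded from the Arduino IDE:

```matlab
a = arduino('COM3', 'Uno');   % MATLAB uploads its server program on first connect
s = servo(a, 'D9');
writePosition(s, 0.5);        % 0..1 maps onto the servo's angular range
pos = readPosition(s);        % read back the commanded position
```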

  4. RESULTS

The servo motor rotates only when the software detects the direction of gaze to be left, right, or straight. From the result figures, we can see that a full image of the user is first captured through the built-in camera and later cropped to focus only on the person's face; then only the eyes are cropped out of the image and focused on. The direction of the gaze determines which PNG image should be displayed and whether the motor has to rotate or not; the last figure shows the hardware part.
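
This decision stage can be pictured as the sketch below; the PNG file names are hypothetical placeholders, and s is the servo handle from the loop sketched earlier.

```matlab
% Illustrative decision stage: the detected gaze direction selects a
% feedback image and decides whether the servo rotates.
switch direction
    case 'left'
        imshow(imread('left.png'));        % on-screen feedback
        writePosition(s, 0.0);             % rotate the motor
    case 'right'
        imshow(imread('right.png'));
        writePosition(s, 1.0);
    case 'straight'
        imshow(imread('straight.png'));
        writePosition(s, 0.5);
    otherwise
        % no confident detection: leave the motor where it is
end
```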

Figure 2. Hardware

This is how the hardware looks: the servo motor is connected to the board through jumper wires, and the board is connected to the laptop through a USB cable.

Figure 3. Output

We can see above that when the user's eyes stare in the left direction, an image is displayed to show us the direction and, simultaneously, the motor starts to rotate.

The case is similar when the pupil is detected looking in the right or straight direction.

CONCLUSION

The Gaze Detection Monitoring system has been developed in order to ease the lives of the differently abled. No matter how much we do, it will still be too little for the less privileged. Hence, the development of this system is a small contribution from me towards letting these people live their lives with freedom, without being dependent on any other individual. I have tried to enhance the speed of pupil detection, thus reducing the delay in detecting the person's eyes.

The cost factor has also been taken into consideration, so that the system can be afforded by the lower strata of society as well. For that purpose, we have used the Arduino ATMega128 micro-controller, which is less costly than a Raspberry Pi board. Moreover, the user saves time in software installation: the Arduino IDE is freely available, whereas the software stack for the Raspberry Pi, which runs only on a Linux machine, is not just difficult to install but also cumbersome to use. A system can never be totally flawless; numerous future changes could enhance the product and make it more common among people, such as adding obstacle detection to make the system more stable and safe.

REFERENCES

    1. A. Al-Rahayfeh and M. Faezipour, "Application of head flexion detection for enhancing eye gaze direction classification," International IEEE EMBS Conference, pp. 966-969, August 2015.

    2. D. W. Hansen and Q. Ji, "In the Eye of the Beholder: A Survey of Models for Eyes and Gaze," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 478-500, 2010.

    3. F. N. Ibrahim, Z. M. Zin and N. Ibrahim, "Eye Center Detection Using Combined Viola-Jones and Neural Network Algorithms," International Symposium on Agent, Multi-Agent Systems and Robotics (ISAMSR), pp. 1-6, 2018.

    4. K. V. Padmaja, "Eye-Gesture Controlled Intelligent Wheelchair using Electro-Oculography," International Symposium on Circuits and Systems (ISCAS), pp. 2065-2068, 2014.

    5. R. T. Bankar and S. S. Salankar, "Head Gesture Recognition using Gesture Cam," 5th International Conference on Communication Systems and Network Technologies, pp. 535-538, 2015.

    6. H. Mohsin and S. H. Abdullah, "Pupil Detection Algorithm based on Feature Extraction for Eye Gaze," 6th International Conference on Information and Communication Technology and Accessibility, pp. 1-4, 2017.

    7. C. Debes, J. Hahn, A. M. Zoubir and M. G. Amin, "Feature Extraction in Through-the-Wall Radar Imaging," IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3562-3565, 2010.

    8. R. Kachouri, K. Djemal, H. Maaref, D. S. Masmoudi and N. Derbel, "Feature extraction and relevance evaluation for heterogeneous image database recognition," First Workshops on Image Processing Theory, Tools and Applications, pp. 1-6, 2008.
