Ocular Movement based Assistance for Catatonic Patients

The objective of this work is to assist catatonic patients. Catatonia produces symptoms that typically involve a lack of movement and difficulty communicating. On average, roughly 1 in 10 people with a severe mental illness develops catatonia at some point. To address this problem, we present a method of human eye detection and movement tracking using the dlib library. An iris tracker can detect the presence, attention and focus of the user, allowing unique insights into human behaviour and enabling natural user interfaces across a broad range of devices. The aim of this project is to develop and implement an iris-tracking system that processes captured image frames. After iris tracking is complete, the processor generates the code word corresponding to the predefined movement and sends instructions to the device controller. The device controller establishes communication with other devices to execute the received commands. This interface lets a catatonic patient control various appliances without any physical effort, reducing the patient's dependence on caregivers.


I. INTRODUCTION
Catatonia is a condition that leaves the patient entirely dependent on others for their daily needs. It is a state in which parts of the human body become immobile. The condition has many possible causes, such as weakening of the muscles, injury to the nervous system, or other serious illness. The result is total dependence on others for basic needs such as adjusting fan speed or calling someone for help. To address these problems, we need a system that can aid these patients.
We intend to build a low-cost iris tracking system based on BGR-to-grayscale image processing. Eye movement will control the fan, lights and temperature, and can also place a panic call when needed. Iris movements are recognized from captured real-time images, and the eyeball pattern is analyzed to determine the gaze direction accurately. A proper arrangement of checkpoints and continuously captured frames reduces incorrect detections. Iris tracking systems capture the focus and attention of a person.
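In practice the BGR-to-grayscale step is done with OpenCV's `cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)`; as a minimal sketch of the underlying computation, the standard luma weighting can be written in pure Python (the function names and sample pixel values here are illustrative, not part of the system):

```python
def bgr_to_gray(pixel):
    """Convert one BGR pixel (OpenCV channel order) to a grayscale
    intensity using the standard ITU-R BT.601 luma weights."""
    b, g, r = pixel
    return round(0.114 * b + 0.587 * g + 0.299 * r)

def frame_to_gray(frame):
    """Apply the conversion to a frame given as nested lists of BGR pixels."""
    return [[bgr_to_gray(px) for px in row] for px_row in [frame] for row in px_row]

# A pure white pixel maps to 255, pure black to 0.
print(bgr_to_gray((255, 255, 255)))  # → 255
print(bgr_to_gray((0, 0, 0)))        # → 0
```

Working on the single-channel grayscale frame rather than three BGR channels is what keeps the per-frame processing cheap enough for real-time tracking on the Raspberry Pi.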
Different iris positions correspond to different code words, which are used for further automation. To increase the flexibility of the project, we apply the concept of IoT, which removes the range limitations of wired or short-range wireless (Bluetooth, ZigBee) communication systems. The code word is sent to the cloud and accessed by the device controller at the output end. The device controller then uses these codes to change the state of the surrounding devices. This iris tracking method makes an impaired patient self-sufficient in controlling their surroundings, and even able to inform a doctor via a panic call when they feel unwell.

II. LITERATURE REVIEW
The main motive of this study is to analyse iris movement and provide spontaneous, efficient, hands-free control for disabled patients. Embedded systems are preferable here because they remove the need for a full computer system and have low power requirements. Several methods exist to track and analyse iris movement. Our system helps catatonic patients control distributed home appliances spontaneously and without any physical effort.
Many eye tracking systems have already been implemented, but their controlled environments make them expensive. Xiahou et al. [1] presented algorithms and proposed models for iris tracking. Anjith et al. [3] discussed different aspects and properties of the iris and pupil movement of the human eye. Ajay Bhasker et al. [2] presented a guidance system based on eye tracking using ZigBee technology.
Abdur Rahim Biswas et al. [4] argue that in the world of IoT we need service provision with reliability, good performance, efficiency and scalability; to achieve these attributes, future systems and research must combine the concepts of cloud computing and IoT. Shopen Dey et al. [5] presented a home automation model and how it can be implemented using IoT. Bousten et al. [6] presented a method for controlling household appliances such as air conditioners, electric lights and fans using relays and electric switches.

III. PROPOSED SYSTEM

Fig. 1: Block diagram of the proposed system

1) Block 1 represents the Image Acquisition Device, the NoIR Raspberry Pi Camera. It captures video as a sequence of frames and is used to detect the position of the iris, which is translated into the function to be performed.
2) Block 2 represents the Processor, a Raspberry Pi 3B. It reads the image captured by the camera and converts the BGR image to grayscale. It also enhances picture quality by equalizing and adjusting parameters such as contrast, brightness and gamma as required for further processing.
3) Block 3 represents the cloud service, ThingSpeak, an open-source Internet of Things (IoT) application and API for storing and retrieving data from things using the HTTP and MQTT protocols, over the Internet or via a Local Area Network. ThingSpeak enables the creation of sensor-logging applications, location-tracking applications, and a social network of things with status updates.

4) Block 4 represents the NodeMCU ESP8266, an open-source IoT platform. It includes firmware that runs on the ESP8266 Wi-Fi SoC from Espressif Systems and hardware based on the ESP-12 module. It communicates with ThingSpeak to accept the processed data and relay it to the blocks ahead.
5) Block 5 represents the switch panel used to "make" or "break" an electrical circuit, interrupting the current or diverting it from one conductor to another. Operating a switch removes or restores the conducting path in the circuit. The NodeMCU receives the data (code word) via ThingSpeak and drives the switching panel, which in turn switches the connected appliances ON and OFF.
6) Block 6 represents the GSM SIM800 module. The SIM800 is preferred over the SIM900 series for its Bluetooth/FM functionality, and it is more compact in size. It supports quad-band 850/900/1800/1900 MHz operation and can transmit voice, SMS and data with low power consumption. Interfaced with the NodeMCU, the GSM module transmits alert messages to registered persons.
7) Block 7 represents the various appliances, such as lights and fans, that are ultimately controlled by the system.

IV. METHODOLOGY

A. Code Word Processing
Each iris pattern is assigned a corresponding code word. The code word is sent to the ThingSpeak platform; the device controller fetches it and processes it, and in this way manages all the surrounding devices.
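The paper does not list the exact code word assignments, so the mapping below is a hypothetical illustration of how gaze directions might be turned into the code words uploaded to ThingSpeak:

```python
# Hypothetical mapping from detected gaze direction to a code word;
# the actual assignments used by the system are not specified here.
CODE_WORDS = {
    "left": 1,     # e.g. toggle the fan
    "right": 2,    # e.g. toggle the light
    "center": 0,   # idle, no action
    "chaotic": 9,  # rapid irregular movement triggers the GSM panic call
}

def to_code_word(direction):
    """Return the code word for a gaze direction, defaulting to idle."""
    return CODE_WORDS.get(direction, CODE_WORDS["center"])

print(to_code_word("left"))   # → 1
print(to_code_word("blink"))  # → 0
```

Keeping the mapping in one table makes it easy to retrain the patient on a different gesture set without touching the rest of the pipeline.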

B. Wireless Communication using Cloud Platform
The cloud provides on-demand availability of data storage as well as data processing and manipulation. It has many merits: maintenance is easy, and productivity is high because many users can work on the same data simultaneously. We use the ThingSpeak platform for this purpose. The Raspberry Pi has an inbuilt Wi-Fi module; after processing, the code word is uploaded through this Wi-Fi module, and the device controller then retrieves the code word from the same server.
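Uploading to ThingSpeak amounts to an HTTP request against its update endpoint; a minimal Python sketch of building that request is shown below (the API key is a placeholder, not the channel's real write key):

```python
from urllib.parse import urlencode
# from urllib.request import urlopen  # used on the Pi to actually send it

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"
WRITE_API_KEY = "XXXXXXXXXXXXXXXX"  # placeholder write key

def build_update_url(code_word, field=1):
    """Build the ThingSpeak update URL that writes a code word to a field."""
    query = urlencode({"api_key": WRITE_API_KEY, f"field{field}": code_word})
    return f"{THINGSPEAK_UPDATE}?{query}"

url = build_update_url(2)
print(url)
# On the Raspberry Pi the request would then be sent with:
#   urlopen(url).read()
```

The device controller reads the same channel back through ThingSpeak's read API, so the channel acts as the shared mailbox between the Raspberry Pi and the NodeMCU.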

C. Automation through Device Controller
We use a NodeMCU for the automation of the appliances. The NodeMCU fetches the code word from ThingSpeak and issues the commands that switch appliances such as the fan, bulb and air conditioner. Relays are used for the interfacing and control the operation of the appliances. The GSM module is also interfaced with the device controller and sends a message according to the patient's iris movement.
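The NodeMCU firmware itself is written in Arduino C++; the pure-Python sketch below only illustrates the decode-and-switch logic it would run. The pin labels and code words are hypothetical:

```python
# Hypothetical code word → (appliance, relay pin) table for the controller.
RELAY_FOR_CODE = {
    1: ("fan", "D1"),
    2: ("light", "D2"),
    3: ("air_conditioner", "D3"),
}

relay_state = {"D1": False, "D2": False, "D3": False}

def handle_code_word(code):
    """Toggle the relay mapped to a code word; code 9 raises a panic alert
    (the GSM SIM800 would then send the alert SMS)."""
    if code == 9:
        return "send_gsm_alert"
    if code in RELAY_FOR_CODE:
        name, pin = RELAY_FOR_CODE[code]
        relay_state[pin] = not relay_state[pin]
        return f"{name} {'ON' if relay_state[pin] else 'OFF'}"
    return "idle"

print(handle_code_word(1))  # → fan ON
print(handle_code_word(9))  # → send_gsm_alert
```

Treating unknown code words as "idle" keeps a corrupted cloud read from flipping a relay unintentionally.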

V. IMPLEMENTATION

A. Raspberry Pi Camera Module:
The Raspberry Pi camera module (5 MP) was created by the Raspberry Pi Foundation and launched in 2013. It comes in both visible-light and infrared versions. Its pixel size is 1.4 µm × 1.4 µm, its sensitivity is 680 mV/lux-sec, and its sensor resolution is 2592 × 1944 pixels.

B. Raspberry Pi 3B Model Processor:
The libraries available for the Raspberry Pi make it more efficient than comparable processors. It has 1 GB of RAM and BCM43143 Wi-Fi on board, and a microSD port for loading the operating system and storing data. The Raspberry Pi 3B is considered an improvement over the 2B model because it features Bluetooth connectivity.

C. GSM SIM800 Module:
The GSM SIM800 is considered more efficient than the GSM SIM900 because of its Bluetooth/FM functionality and compact size. It supports quad-band 850/900/1800/1900 MHz operation, which enables transmission of voice, SMS and data with low power consumption. Interfacing the GSM module with the device controller provides alert messages to the enrolled person.

VI. IRIS DETECTION AND TRACKING

A. Iris Detection
1) Without Face: The eye is detected directly, without any face detection algorithm. The advantage is that even when the face is turned, the eye can still be detected, since the method concentrates only on the eye and not on the face; the detection time is also reduced. However, a false region may be captured as an eye, which can make the system difficult to operate.
2) With Face: The face is detected in the image first, and then the algorithm searches for the eye within the face region. This is considered the proper method: without it, a false object lying outside the face can be wrongly declared the iris, so detecting the iris inside the face provides more reliability. However, if the face is turned, the face might not be detected at all.
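The "with face" approach essentially filters eye candidates by whether they lie inside a detected face box. A minimal geometric sketch, with boxes as (x, y, w, h) tuples and all coordinate values purely illustrative:

```python
def inside(inner, outer):
    """Return True if box `inner` lies entirely within box `outer`.
    Boxes are (x, y, w, h) tuples in pixel coordinates."""
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def filter_eye_candidates(candidates, face):
    """Keep only eye candidates inside the face box, rejecting false
    detections that lie elsewhere in the frame."""
    return [c for c in candidates if inside(c, face)]

face = (100, 80, 200, 220)
candidates = [(150, 140, 40, 20),   # inside the face: kept
              (400, 300, 40, 20)]   # outside the face: rejected
print(filter_eye_candidates(candidates, face))  # → [(150, 140, 40, 20)]
```

This containment check is what trades the speed of eye-only detection for the robustness described above.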

B. Iris Tracking:
We detect the face and the exact coordinates of the eye. The dlib library, used alongside OpenCV, allows us to detect facial and eye landmark coordinates. First we load the face detector and landmark predictor to obtain the coordinates; then we define a helper function that takes the coordinates of two points and returns their midpoint. All frame capture happens in real time. The face landmark detector yields specific index positions, from which we extract the locations of both the left and the right eye. To reduce processing time, we can track only one eye, since both eyes move synchronously. Because the image region must be rectangular, we take all the extreme points of the eye.
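The midpoint helper and the rectangular eye region can be sketched in pure Python; in the actual pipeline the input points come from dlib's 68-point landmark predictor, where indices 36-41 and 42-47 are commonly taken as the two eyes. The sample coordinates below are illustrative, not real detector output:

```python
def midpoint(p1, p2):
    """Return the integer midpoint of two (x, y) landmark points."""
    return ((p1[0] + p2[0]) // 2, (p1[1] + p2[1]) // 2)

def eye_rectangle(points):
    """Return the axis-aligned bounding rectangle (x_min, y_min, x_max, y_max)
    enclosing all landmark points of one eye."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

# Illustrative landmark points for one eye, corner to corner.
eye = [(36, 54), (40, 51), (45, 51), (49, 54), (45, 57), (40, 57)]
print(midpoint(eye[0], eye[3]))  # → (42, 54)
print(eye_rectangle(eye))        # → (36, 51, 49, 57)
```

Comparing the iris position against this rectangle's left and right edges is what yields the left/right gaze decision that the code words encode.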
Once detection is complete, we assign different code words to the left and right positions of the eye in different fields. These code words are uploaded to the channel created on the ThingSpeak platform; a value is assigned to a field using the write-channel-feed API request. The values are then used by the NodeMCU, which serves as the device controller. The NodeMCU accesses the cloud by including the required header files and libraries and specifying the channel name.
#include <ThingSpeak.h>

The code words are then used to switch the electrical appliances ON and OFF, while the GSM module is used for the panic call, activated by chaotic movement of the iris, which allows a doctor to monitor the patient's condition regularly.

VII. SUMMARY

In this paper, we have presented a system based on iris detection and tracking of its movement using classifiers and filters. The system recognizes the fragments of an image corresponding to parts of the human eye such as the iris and pupil. To enable an efficient and thorough analysis of the project, we have divided it into three sections:
1) The first section comprises the image acquisition stage, handled by the Raspberry Pi NoIR camera.
2) The second section comprises the image processing done by the Raspberry Pi 3B and the dlib library, followed by sending the interpreted commands to the cloud.
3) The third section comprises the output stage, where room automation is carried out from the received commands through the NodeMCU.
The merit of using iris detection instead of face detection is that it is lightweight and does not require training. However, since facial recognition is not used in the project, an unauthorized person other than the patient could issue a fake command.

VIII. FUTURE SCOPE
• This project can be scaled up to assist multiple patients at a time using the same resources.
• The system should be made efficient enough to distinguish the patient from other people within the camera's field of view.
• A Graphical User Interface (GUI) may be used to examine and control the working of the system.