Neuro-Cam

DOI: 10.17577/IJERTCONV6IS13104


A. Nikshep Prasad

Electronics and Communication Rajarajeswari College of Engineering Bangalore, India

Akshaykumar V. M.

Electronics and Communication Rajarajeswari College of Engineering Bangalore, India

Asma Ahmed

Electronics and Communication Rajarajeswari College of Engineering Bangalore, India

Ayush Yadav

Electronics and Communication Rajarajeswari College of Engineering Bangalore, India

Abstract—NEURO-CAM is based on Image Processing, TTS (Text-to-Speech) and BCI (Brain-Computer Interface). The project is targeted primarily at blind and physically disabled people. Image processing is used for applications such as colour detection, face recognition, object detection and optical character recognition. Text-to-Speech converts the processed image output to speech, i.e. audio output. The Brain-Computer Interface is used to control home appliances such as lights and fans, and assistive devices such as a wheelchair, for the ease of the physically disabled.

Keywords— Image processing, text-to-speech, brain-computer interface, Raspberry Pi, OpenCV, Python, Arduino, Bluetooth, NeuroSky MindWave, GSM, GPRS.

  1. INTRODUCTION

    The technology is in the form of spectacles and is capable of detecting colours, reading text, detecting faces, identifying currency notes and recognizing particular objects, reading the results out to the user. The Brain-Computer Interface (BCI) module extracts electroencephalography (EEG) signals from the brain to execute the specified operations.

    For a better understanding of Neuro-Cam, let us classify it into two parts: the V-Eye and the BCI.

    The Virtual-Eye, i.e. V-Eye, is an artificial vision device for blind and partially sighted people that helps them recognize faces, objects, colours and currency notes, read out the current date and time, and even read text from any surface with the help of a camera. The device can be used by anyone and resembles a normal pair of spectacles, but has an integrated camera and circuitry. It thus acts as a complete assistant for the blind and notifies the user of the actions and things happening in his/her surroundings. As a backup, a switch is provided that shares the current location of the user with specified contacts.

    A Brain-Computer Interface (BCI) provides a direct communication pathway between a human brain and an external device. The communication can take place in two directions: a BCI can either accept commands from the brain or send signals to it. Two-way BCIs would allow brains and external devices to exchange information in both directions, but these have yet to be successfully implemented in animals or humans. In our project we deal with communication that involves transmitting commands from the brain to the computer. This is done using appropriate neural sensors and data-processing algorithms.

    In this context, we use a mono-electrode headband to receive the EEG signals from the brain. The BRAINSENSE headband used here has a single electrode placed at the center of the band, which rests on the forehead. We mainly deal with the level of concentration, i.e. how focused the brain is.

    Depending on the level of concentration, we receive different readings in the serial monitor window. Ranges of these readings are then specified and assigned to particular activities. We have tried this on home appliances such as lights and fans, and also on a car that can serve as a wheelchair or for any other desired application.

    Figure i: Block Diagram of Neuro-Cam

    Figure ii: NEURO-CAM

  2. LITERATURE SURVEY

    • Brain Computer Interface: Design and Development of a Smart Robotic Gripper for a Prosthesis Environment.

      Brain-Computer Interface (BCI) technology is a rapidly growing research area with various applications. Its involvement in the medical field lies in prevention and neuronal rehabilitation for severe injuries. BCI systems build a pathway between the human brain and external devices, providing patients a medium for communicating with the world and reducing their dependence on others. BCIs are often oriented towards research, mapping, assisting and augmenting human cognitive or sensory-motor functions. The growth of BCIs has accelerated in recent years, paving the way for research and aiming to be more accessible to people. This paper describes a method to control a prosthetic moving robotic gripper using brain waves. Brain waves, detected using the NeuroSky MindWave Mobile headset, are processed and sent to a microcontroller for controlling a real-time application. The system thus provides a method for Amyotrophic Lateral Sclerosis (ALS) patients, or people who have little or no reliable muscle movement, to control the prosthetic gripper and lead a self-reliant life to some extent.

    • Learning Platform for Visually Impaired Children through Artificial Intelligence and Computer Vision.

    Visual disabilities and computer vision are among the most researched topics of recent years, and researchers have been trying to combine the two to create usable systems that aid the visually disabled in their day-to-day tasks. This research creates an application targeting children between the ages of 6 and 14 who suffer from visual disabilities, to aid them in the primary learning task of identifying objects without the supervision of a third party. It achieves this by combining the latest advancements in computer vision and artificial intelligence, using Region-based Convolutional Networks (R-CNN), Recurrent Neural Networks (RNN) and speech models to provide an interactive learning experience to such individuals.

  3. METHODOLOGY

    V-Eye:

    The camera captures the view and the data is sent to a processor, where the image is processed to detect and identify the captured object or text. The processed data is then converted to an audio signal by a TTS (Text-to-Speech) engine and fed to a speaker or earphone.

    This is accomplished using the open-source library OpenCV (Open Source Computer Vision) on a Raspberry Pi running Raspbian OS, with the code written in Python, as sketched below.
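    A minimal sketch of this capture-process-speak pipeline follows. It assumes the pyttsx3 TTS engine; process_frame() is a hypothetical placeholder for the detectors described in the sections below.

    ```python
    # Minimal sketch of the capture -> process -> speak pipeline.
    # pyttsx3 is an assumed TTS engine; process_frame() is a hypothetical
    # placeholder for the face/object/colour/OCR stages.
    import cv2
    import pyttsx3

    def speak(text):
        engine = pyttsx3.init()      # initialise the TTS engine
        engine.say(text)
        engine.runAndWait()          # block until the audio has played

    def process_frame(frame):
        # Placeholder: the detection stages plug in here.
        return "nothing detected"

    cap = cv2.VideoCapture(0)        # the spectacle-mounted camera
    ok, frame = cap.read()           # grab a single view
    if ok:
        speak(process_frame(frame))
    cap.release()
    ```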

    Image processing is used to identify faces, colours, text, currency notes and obstacles with the help of the night-vision camera, which also enables use in dimly lit areas.

    We can specify in the software a particular colour marker or button to watch for while reading text from a book or screen. When the specified colour/button is detected, the device captures an image of the book or screen, and the TTS module then converts the text in the image to speech output for the user. One way to realise the colour trigger is sketched below.
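    In the sketch, the HSV range (a red marker) and the 5% coverage threshold are assumed example values, not the authors' calibrated settings.

    ```python
    # Colour-trigger sketch: when the chosen marker colour fills enough of
    # the view, the current frame is handed off to the OCR stage.
    import cv2
    import numpy as np

    def marker_seen(frame, lo=(0, 120, 70), hi=(10, 255, 255), frac=0.05):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        return cv2.countNonZero(mask) / mask.size > frac

    # if marker_seen(frame):
    #     cv2.imwrite("page.jpg", frame)   # hand off to the OCR stage
    ```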

    Figure iii: Block Diagram of V-Eye

    Figure iv: Flow Diagram of V-Eye

    The flow diagram shows the fundamental steps: the camera, being the first interfaced module, captures the image and sends it to the next block, where the information is extracted.

    A quick comparison checks whether the newly captured image is the same as the previous one. If the captured image is new, it is processed on a suitable microcontroller. Tools such as Tesseract OCR, the TTS module and Haar cascade classifiers are used with OpenCV for processing the images. The TTS module then converts the data extracted from the image into speech output. One way to realise the comparison step is sketched below.
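    The comparison could be a mean absolute-difference test between consecutive grayscale frames, as in this sketch; the threshold of 15 is an assumed tuning value.

    ```python
    # "Same as previous image?" check from the flow diagram, sketched as a
    # mean absolute-difference test on consecutive grayscale frames.
    import cv2

    def is_new_image(frame, prev_gray, thresh=15):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None:
            return True, gray            # first frame is always "new"
        diff = cv2.absdiff(gray, prev_gray)
        return diff.mean() > thresh, gray

    # new, prev = is_new_image(frame, prev)   # process frame only if new
    ```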

    BCI (Brain to Computer Interface):

    In the BCI part of our project, a brain command signal from the human is sensed and an appropriate activity is assigned to it. We designed a band on which EEG neuro-sensors are placed. An EEG neuro-sensor measures the voltage fluctuations resulting from ionic currents within the neurons of the brain.

    Every action of a human is controlled by the brain, i.e. by neuron cells; thus electrodes placed on the head detect the unique pulse pattern generated by each thought.

    Figure v: Block Diagram of BCI

    The Pluguino microcontroller is programmed to receive the pulses from the BCI headband through a Bluetooth module. The received readings are observed in the Serial Monitor window, and a specified range is assigned to a particular application/task. Controlling lights, fans, an electric wheelchair, prosthetic limbs and other appliances thus becomes easier for the physically challenged person. A backup button is also provided so that, in an emergency, the user can share his/her location with a concerned person and ask for help. The range-to-task mapping is sketched below.
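    The paper implements the mapping on the Pluguino; purely as a sketch of the same logic, the Python snippet below reads attention values from a Bluetooth serial port with pyserial. The port name, baud rate, ranges and command strings are all illustrative assumptions.

    ```python
    # Sketch of the range-to-action logic described above, host side.
    # Port name, baud rate, ranges and command strings are assumptions.
    import serial

    ACTIONS = [                      # (low, high, command) -- assumed ranges
        (30, 50, b'LIGHT_ON\n'),
        (50, 70, b'FAN_ON\n'),
        (70, 101, b'CHAIR_FWD\n'),
    ]

    with serial.Serial('/dev/rfcomm0', 9600, timeout=1) as port:
        line = port.readline().strip()       # e.g. b'64' from the headband
        if line.isdigit():
            attention = int(line)
            for lo, hi, cmd in ACTIONS:
                if lo <= attention < hi:
                    port.write(cmd)          # forward the assigned command
                    break
    ```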

    Figure vi: Flow Diagram of BCI

    The brain signals are extracted by an electrode in the brain-sensor flux band. These signals are converted into digital form and sent to the microcontroller, which is programmed to perform various activities depending on the concentration and attention level of the brain: the higher the attention, the greater the reading. Different ranges are therefore assigned to different activities. Hence electric and electronic appliances are controlled easily, which improves the standard and quality of living.

  4. RESULT

    These results were obtained in real time on the Raspberry Pi.

      1. Face Recognition:

        For face recognition, the face data must first be fed into the system. This is done by a program that builds a database by capturing images of the person's face (about 50 images), converting them to grayscale and saving them with the respective ID; it uses the Haar cascade frontal-face file for detecting the face. A trainer program then extracts feature values from the captured images. Finally, the face-recognition program is run: it opens the camera window and marks the faces being recognized by comparison with the trained file. The recognized face names are then read out; a sketch of this step follows the figure.

        Figure vii: Face being recognized
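        Below is a minimal sketch of the recognition step, assuming OpenCV's LBPH recognizer (available in opencv-contrib) trained on the grayscale database described above; the file names and the ID-to-name table are illustrative.

        ```python
        # Sketch of Haar-cascade detection plus LBPH recognition.
        # 'trainer.yml' is the assumed output of the trainer program.
        import cv2

        detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
        recognizer = cv2.face.LBPHFaceRecognizer_create()  # opencv-contrib
        recognizer.read('trainer.yml')
        names = {1: 'Person A', 2: 'Person B'}             # assumed mapping

        gray = cv2.cvtColor(cv2.imread('frame.jpg'), cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            face_id, dist = recognizer.predict(gray[y:y+h, x:x+w])
            print(names.get(face_id, 'unknown'))   # then read out via TTS
        ```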

      2. Object Detection:

        For object detection, a database is first generated, names are assigned to the objects, and the system is trained. The program then detects and recognizes the trained objects as shown, and the object names are read out. An illustrative sketch follows the figure.

        Figure viii: Object being recognized
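        The paper does not state which detection method was trained; purely as an illustration, the sketch below runs a pre-trained MobileNet-SSD through OpenCV's DNN module. The model files, confidence threshold and class handling are assumptions, not the authors' trained database.

        ```python
        # Illustrative object detection via OpenCV's DNN module with a
        # pre-trained MobileNet-SSD; model file names are assumptions.
        import cv2

        net = cv2.dnn.readNetFromCaffe('deploy.prototxt',
                                       'mobilenet_ssd.caffemodel')
        frame = cv2.imread('frame.jpg')
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        net.setInput(blob)
        for det in net.forward()[0, 0]:
            if det[2] > 0.5:                   # assumed confidence threshold
                print('class id', int(det[1])) # map to a name, then speak it
        ```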

      3. Colour Identification:

        For colour detection, the ranges of the colours are specified in the program; when it is executed, the colours are detected and the detected colour names are converted to speech. A sketch follows the figure.

        Figure ix: Color of the balls being recognized
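        The sketch below illustrates this step; the HSV range for each named colour is an assumed example value, not the authors' calibrated range.

        ```python
        # Colour identification sketch: the named colour covering the most
        # pixels is returned for the TTS module to speak.
        import cv2
        import numpy as np

        RANGES = {  # assumed example HSV ranges
            'red':   ((0, 120, 70), (10, 255, 255)),
            'green': ((40, 70, 70), (80, 255, 255)),
            'blue':  ((100, 150, 0), (140, 255, 255)),
        }

        def dominant_colour(frame):
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            counts = {name: cv2.countNonZero(
                          cv2.inRange(hsv, np.array(lo), np.array(hi)))
                      for name, (lo, hi) in RANGES.items()}
            return max(counts, key=counts.get)  # hand the name to TTS
        ```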

      4. Date and Time:

        The program is written such that it fetches the current date, time and month from the system and reads them out to the user through the TTS module, as sketched after the figure.

        Figure x: The date and time output of the system
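        A minimal sketch of this readout, assuming the same pyttsx3 TTS engine as in the earlier pipeline sketch:

        ```python
        # Date-and-time readout sketch; pyttsx3 is an assumed TTS engine.
        import datetime
        import pyttsx3

        now = datetime.datetime.now()
        engine = pyttsx3.init()
        engine.say(now.strftime('It is %I:%M %p on %d %B %Y'))
        engine.runAndWait()
        ```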

      5. Optical Character Recognition:

        When the program is executed, the camera captures an image, and the pytesseract module converts the image to text if any is found in it. The detected text is then read out by the TTS module through a connected speaker or earphone, as sketched below.

        Figure xi: The first image shows the captured image containing text; the second shows the detected characters after processing.
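        A minimal sketch of this step, assuming pytesseract (which requires the Tesseract binary) and the same TTS hand-off as the other stages; the file name is illustrative.

        ```python
        # OCR sketch: pytesseract extracts text from the captured frame,
        # and the result is handed to the TTS module.
        import cv2
        import pytesseract

        img = cv2.imread('page.jpg')                # assumed captured image
        text = pytesseract.image_to_string(img).strip()
        if text:
            print(text)   # read out via the TTS module as in other stages
        ```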

      6. BCI

        The program flashed onto the Pluguino receives the signal from the brain-sensor band and accordingly moves the car in any of the four directions.

        Figure xii: The Brain sensor band and the Pluguino powered car

  5. FUTURE SCOPE

    The V-Eye and the BCI can be improved in many ways.

    The V-Eye can be extended with more features such as distance calculation and prioritization of detected objects; support for different fonts can be implemented for the OCR; the processing can be made faster; and gestures can be added for executing different tasks.

    The BCI can be made more reliable by adding more electrodes to the brain-sensor band, so that more signals can be received from different parts of the brain, more tasks can be assigned, and the readings for each action become more accurate.

  6. CONCLUSION

    This project was carried out and all the outputs were obtained in real time. With it, a blind user is able to go through books or any other text source (in standard-format text), identify the colour, object or person in front of him/her, and learn the current date and time. The physically disabled will be able to control a wheelchair, the lights and fans at home, and any other appliance that is connected and programmed.

  7. REFERENCES

      1. "A Brief Study of Brain Computer Interface: Advantages and Disadvantages," 7th International Conference on Science, Technology and Management, Guru Gobind Singh Polytechnic, Nasik, 25 Feb. 2017, ISBN: 978-93-86171-30-6.

      2. "The OrCam MyEye 2.0," Prof. Amnon Shashua, the Carroll Centre, 30 Oct. 2014.

      3. "Multiple Face Detection Based on Machine Learning," Hajar Filali, Jamal Riffi, Adnane Mohamed Mahraz, Hamid Tairi, 2018 International Conference on Intelligent Systems and Computer Vision (ISCV).

      4. "Research on Face Detection Based on Fast Haar Feature," Shuang Wang, Guanyu Wen, Hua Cai, 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI).

      5. "Research on Face Recognition Based on Deep Learning," Xiao Han, Qingdong Du, 2018 Sixth International Conference on Digital Information, Networking, and Wireless Communications (DINWC).

      6. "Object Detection in Real-Time Systems: Going Beyond Precision," Anupam Sobti, Chetan Arora, M. Balakrishnan, 2018 IEEE Winter Conference on Applications of Computer Vision (WACV).

      7. "An Object Detection System Based on YOLO in Traffic Scene," Jing Tao, Hongbo Wang, Xinyu Zhang, Xiaoyu Li, Huawei Yang, 2017 6th International Conference on Computer Science and Network Technology (ICCSNT).

      8. "OCR Based Facilitator for the Visually Challenged," Shalini Sonth, Jagadish S. Kallimani, 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT).

      9. "An Overview of the Tesseract OCR Engine," R. Smith, Ninth International Conference on Document Analysis and Recognition (ICDAR 2007).

      10. "Real-Time Traffic Light Detection Using Color Density," Tai Huu-Phuong Tran, Cuong Cao Pham, Tien Phuoc Nguyen, Tin Trung Duong, Jae Wook Jeon, 2016 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia).

      11. "Real-Time Face Detection in Color Video," Szu-Hao Huang, Shang-Hong Lai, 10th International Multimedia Modelling Conference, 2004.

      12. "Trained by Demonstration Humanoid Robot Controlled via a BCI System for Telepresence," Batyrkhan Saduanov, Tohid Alizadeh, Jinung An, Berdakh Abibullaev, 2018 6th International Conference on Brain-Computer Interface (BCI).

      13. "Classification of EEG Based BCI Signals Imagined Hand Closing and Opening," Ebru Yavuz, Önder Aydemir, 2017 40th International Conference on Telecommunications and Signal Processing (TSP).

      14. "Smart Home: Toward Daily Use of BCI-Based Systems," Wafa Alrajhi, Dalal Alaloola, Amenah Albarqawi, 2017 International Conference on Informatics, Health & Technology (ICIHT).

      15. "Identification of Some Brain Waves Signal and Applications," Nguyen Thi Hong Hanh, Huynh Van Tuan, 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA).
