Dumb Aid Mobile Communication System with Hand Gesture Recognition using Neural Network

DOI : 10.17577/IJERTCONV3IS19165


Riya Vincent,

Student (M.Tech),

Department of Electronics and Communication,

T. John Institute of Technology, Bangalore, Karnataka, India.

Mrs. Naganibha A. S.,

Assistant Professor,

Department of Electronics and Communication,

T. John Institute of Technology, Bangalore, Karnataka, India.

Athul Anand T. M., Project Manager-Embedded, Arvin Technologies, Kalamassery, Kerala, India.

    Abstract—The dumb aid mobile communication system with voice transmission is an electronic device that helps give the power of speech to a dumb person. With this device a normal person can easily communicate with a dumb person through a hand-held device. The proposed system is a coupled transmitter and receiver, with a normal person at one end and a dumb person at the other. The system recognizes hand gestures captured by a video camera and a standard consumer personal computer, and is developed and implemented in the MATLAB mathematical environment. A pattern recognition system uses a transform that converts an image into a feature vector, which is then compared with the feature vectors of a training set of gestures. Computer recognition of hand gestures may provide a more natural human-computer interface, allowing people, for example, to point at or rotate a CAD model with their hands. One of the most structured sets of gestures belongs to sign language. In sign language, each gesture has an assigned meaning (or meanings), and the corresponding voice is produced using MATLAB algorithms. The gesture voice is transmitted via a DTMF coder or an application modem. A default number is configured, and manual dialling is also possible through a graphical user interface designed in MATLAB. The person at the receiving end can hear the voice of the dumb person through this system. The system is to be implemented on a Raspberry Pi, a capable little computer. The system also allows a normal person to make a call to the dumb person, so two-way communication is possible.

    Keywords—Neural network, SAD, GPU.

    1. INTRODUCTION

      One can create life, but no one has the right to destroy it, as the saying goes. Some humans are physically challenged, but this does not mean they have to be deprived of all worldly pleasures and fun. In this project we take up this issue and build a mobile communication system that can interpret the sign language of dumb people as speech output. It has the capability of capturing human hand signals and producing speech (voice) output accordingly. Sign language is an expressive and natural way of communication between normal and dumb people, with information conveyed mainly through hand gestures. The intention of the sign language translation system is to translate normal sign language into speech and to make communication with dumb people easy. The proposed system is developed in order to improve the lifestyle of dumb people.

      Sign language uses both physical and non-physical communication. The physical part consists of hand gestures that convey their respective meanings; the non-physical part includes head movements, facial appearance and body language, and these differ from country to country in orientation and position. Sign language is not universal: America developed American Sign Language, Britain developed British Sign Language and Thailand developed Thai Sign Language. The two main approaches are image processing and data gloves. The image processing technique uses a camera to capture the image or video, analyses the data as static images, recognizes the image using algorithms and produces voice signals as output. Vision-based sign language recognition mainly uses algorithms such as hidden Markov models and Artificial Neural Networks (ANN), while the Sum of Absolute Difference (SAD) algorithm is used to extract the image and eliminate the unwanted background noise. Current glove-based techniques for hand gesture capture have disadvantages: they require a specially designed glove, which is additional hardware, and there is also a delay between input and output. In order to overcome such drawbacks, a device based on a neural network is built. It offers efficient generalization ability, tolerance to input noise through hand colour detection, reduced delay and parallel processing, and mathematical modelling is not required. A minimal illustration of the SAD frame-differencing idea is sketched below.
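      To illustrate the frame-differencing idea behind the SAD step, the following is a minimal MATLAB sketch. The image file names, the threshold value of 0.15 and the blob-size limit of 200 pixels are illustrative assumptions, not values taken from this paper.

      % Minimal sketch of SAD-style background removal (assumed parameters).
      background = im2double(rgb2gray(imread('background.png')));  % empty scene
      frame      = im2double(rgb2gray(imread('frame.png')));       % scene with the hand

      sadMap = abs(frame - background);     % per-pixel absolute difference
      mask   = sadMap > 0.15;               % keep pixels that changed noticeably
      mask   = bwareaopen(mask, 200);       % drop small noisy blobs

      handOnly = frame .* mask;             % suppress the static background
      imshow(handOnly);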

    2. HAND GESTURE RECOGNITION USING NEURAL NETWORK

      Hand gesture recognition is not limited to paper or digital surfaces; it also extends to the third dimension. Considerable research and attention have been devoted to this topic because of the difficulties posed by computational capability, learning algorithms and camera performance, and rapid improvements have been achieved over the past few years. The problem goes beyond static gesture detection: a gesture is a combination of different finger states, the angles of the fingers and their orientations [1].

      1. Neural network

        Here gestures are recognized using a neural network. This is an information processing paradigm inspired by the biological nervous system; it works in a way that resembles how the human brain processes information. The key element is the novel structure of the information processing system: it solves a specific problem through a large number of highly interconnected processing elements working together. Like people, it learns from examples. Its remarkable advantage is that it can detect gestures more effectively than humans or other computer techniques, even when the gestures are complex. A trained neural network can be regarded as an expert in the category of information it has been given to analyse. Because it solves the problem by itself, its behaviour can be unpredictable, which is one of its disadvantages.
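        A minimal MATLAB sketch of such a feed-forward pattern-recognition network is given below. The feature length (256), number of gesture classes (5), sample count and hidden-layer size (20) are placeholder assumptions, not the paper's actual configuration; in the real system the inputs would come from the feature-extraction stage.

        % Minimal sketch: training a pattern-recognition network on gesture features.
        X = rand(256, 120);                   % placeholder feature vectors (one per column)
        T = full(ind2vec(randi(5, 1, 120)));  % placeholder one-hot gesture labels

        net = patternnet(20);                 % feed-forward network, 20 hidden neurons
        net = train(net, X, T);               % train with default backpropagation settings

        scores  = net(X(:, 1));               % class scores for one feature vector
        gesture = vec2ind(scores);            % index of the recognized gesture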

      2. Hand colour detection

        One of the major problems in hand gesture recognition is background noise. If the dumb person is in a public place such as an airport or railway station, the background noise will also get converted into the voice output, which creates difficulties in the communication. To avoid this, skin colour detection can be used: the skin colour of the dumb person is extracted from the image taken by the camera, so an accurate result can be produced. Another approach finds the boundary contours of the hand; it is robust to scale, translation and rotation, but extremely demanding computationally. In another multi-camera approach, the centre of gravity of the hand is located, and the points with maximal distance from the centre give the locations of the fingertips, which are then used to obtain a skeleton image and finally for gesture recognition with particle filters. A rough sketch of the skin-colour segmentation step follows.
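        The skin-colour extraction could look roughly like the following MATLAB sketch. The YCbCr threshold ranges are commonly quoted approximate skin limits and, together with the file name and clean-up parameters, are assumptions rather than figures from this paper.

        % Minimal sketch of skin-colour segmentation in YCbCr space (assumed thresholds).
        rgbFrame = imread('hand.png');
        ycbcr    = rgb2ycbcr(rgbFrame);
        Cb = ycbcr(:,:,2);
        Cr = ycbcr(:,:,3);

        skinMask = (Cb >= 77) & (Cb <= 127) & (Cr >= 133) & (Cr <= 173);
        skinMask = imfill(bwareaopen(skinMask, 300), 'holes');   % clean up the mask

        handRegion = bsxfun(@times, rgbFrame, uint8(skinMask));  % keep only skin pixels
        imshow(handRegion);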

      3. Existing system

        Many technological products have been developed to comfort and ease the lives of dumb people. The glove-based technique requires a specially designed glove to be worn by the user, and gestures are recognized only when the glove is used. This becomes an additional hardware requirement, sometimes adds to the discomfort of the user, and there is also a delay between input and output. In order to overcome these shortcomings of previously proposed systems, a new system is presented here.

      4. Proposed system

        A dumb aid mobile communication system is to be designed to provide better communication between dumb and normal persons. The design is modelled and simulated using MATLAB and implemented on a Raspberry Pi. The Raspberry Pi is a credit-card-sized computer that can do many of the things a desktop PC does, including playing high-definition video. It contains an ARM1176JZF-S with floating point, running at 700 MHz, and a VideoCore IV GPU. The GPU is capable of Blu-ray-quality playback using H.264 at 40 Mbit/s, and it has a fast 3D core accessed through the supplied OpenGL ES 2.0 and OpenVG libraries. The performance of the proposed system should satisfy the theoretical specifications and can be verified through the simulation results.

      5. Design Implementation

        Phase I: Image capturing

        When the user gives a command with his or her hand, the gestures are captured by a video camera. This stage is developed in MATLAB.
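        A minimal capture sketch in MATLAB is shown below; it assumes the MATLAB Support Package for USB Webcams is installed, and the frame count and frame rate are illustrative.

        % Minimal sketch of the image-capture stage (assumed frame count and rate).
        cam = webcam;                    % connect to the default webcam
        frames = cell(1, 30);
        for k = 1:30
            frames{k} = snapshot(cam);   % grab one RGB frame per iteration
            pause(0.1);                  % roughly 10 frames per second
        end
        clear cam;                       % release the camera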

        Phase II: Image processing

        The pattern recognition system uses a transform that converts an image into a feature vector, which is then compared with the feature vectors of a training set of gestures.
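        A rough MATLAB sketch of this transform-and-compare idea follows. The choice of a 2-D DCT, the 16x16 block of coefficients and the placeholder training data are illustrative assumptions, not the paper's exact transform or data set.

        % Minimal sketch of feature extraction and nearest-neighbour matching (assumed).
        handImage = imresize(im2double(rgb2gray(imread('hand.png'))), [64 64]);
        coeffs  = dct2(handImage);                     % transform the image
        feature = reshape(coeffs(1:16, 1:16), [], 1);  % keep low-frequency coefficients

        trainFeatures = rand(256, 40);                 % placeholder training feature vectors
        trainLabels   = randi(5, 1, 40);               % placeholder gesture labels

        dists = sqrt(sum(bsxfun(@minus, trainFeatures, feature).^2, 1));  % Euclidean distances
        [~, idx] = min(dists);
        recognizedGesture = trainLabels(idx);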

        Phase III: The Gesture Recognition

        Gestures are recognized using a neural network. Artificial neural networks are generally presented as systems of interconnected neurons that can compute values from inputs and are capable of pattern recognition [2]. This stage is developed in the MATLAB environment.

        Phase IV: Voice Transmission

        The gesture voice is transmitted via a DTMF coder or an application modem. A default number is configured, and manual dialling is also possible through a graphical user interface designed in MATLAB. The person at the receiving end can hear the voice of the dumb person through this system.
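        The DTMF dialling part of this phase could be sketched in MATLAB as below. The default number, sampling rate, tone duration and inter-digit gap are assumed values; only the row/column frequency pairs are the standard DTMF frequencies.

        % Minimal sketch of DTMF tone generation for dialling a stored default number.
        defaultNumber = '9876543210';            % assumed placeholder number
        fs = 8000;  t = 0:1/fs:0.2;              % 200 ms per digit at 8 kHz

        keys  = '123A456B789C*0#D';
        fLow  = [697 770 852 941];               % DTMF row frequencies (Hz)
        fHigh = [1209 1336 1477 1633];           % DTMF column frequencies (Hz)

        signal = [];
        for d = defaultNumber
            k = find(keys == d) - 1;
            tone = sin(2*pi*fLow(floor(k/4)+1)*t) + sin(2*pi*fHigh(mod(k,4)+1)*t);
            signal = [signal, tone, zeros(1, round(0.05*fs))];   % 50 ms gap between digits
        end
        sound(signal/2, fs);                     % play the dialling sequence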

        Phase V: Implementation

        The system is to be implemented on a Raspberry Pi using the OpenCV library.

    3. APPLICATION

        • Provides a platform for interaction between a dumb person and a normal person who doesn't know sign language.

        • Applicable in media for sign language translation.

    4. FUTURE SCOPE

        • Dumb aid telephone system with video-conferencing facility.

        • Use of the same device by multiple people.

        • The recognition of moving gestures can be addressed using an accelerometer sensor at the wrist to fully capture changes in wrist movement.

    5. REFERENCES

  1. Meenakshi Panwar, "Hand Gesture Based Interface for Aiding Visually Impaired," Centre for Development of Advanced Computing, Noida, Uttar Pradesh, India, 2012 International Conference on Recent Advances in Computing and Software Systems.

  2. "Simulation of Real Time Hand Gesture Recognition for Physically Impaired," International Journal of Advanced Research in Computer and Communication Engineering, vol. 2, issue 11, November 2013.

  3. Higgins, E. L., and Zvi, J. C. (1995), "Assistive technology for postsecondary students with learning disabilities: From research to practice," Annals of Dyslexia, 45: 123-143.

  4. Wetzel, K. (1996), "Speech-recognizing computers: A written communication tool for students with learning disabilities," Journal of Learning Disabilities, 29(4): 371-380.

  5. The Microsoft projects on speech processing, retrieved on 18th February 2012 from http://microsoft.com/enus/groups/srg.

  6. "Dragon NaturallySpeaking: Helping All Students Reach Their Full Potential," a white paper for the education industry from Nuance Communications, March 2009.
