DOI: 10.17577/IJERTV10IS100129

A Review Paper on Sign Language Recognition for The Deaf and Dumb

R Rumana1, Reddygari Sandhya Rani2, Mrs. R. Prema3

1 B.E. Graduate (IV Year), Department of Computer Science and Engineering, SCSVMV, Kanchipuram
2 B.E. Graduate (IV Year), Department of Computer Science and Engineering, SCSVMV, Kanchipuram
3 Assistant Professor, Department of Computer Science and Engineering, SCSVMV, Kanchipuram

Abstract: Hand gestures are one of the methods used in sign language for non-verbal communication. Sign language is most commonly used by deaf and dumb people, who have hearing or speech problems, to communicate among themselves or with normal people. Various sign language systems have been developed by many researchers around the world, but they are neither flexible nor cost-effective for the end users. Hence, this paper presents a software system prototype that is able to automatically recognize sign language, helping deaf and dumb people to communicate more effectively with each other and with normal people. Dumb people are usually deprived of normal communication with other people in society, and normal people find it difficult to understand and communicate with them. These people have to rely on an interpreter or on some form of visual communication, but an interpreter will not always be available, and visual communication is mostly difficult to understand. Sign language is the primary means of communication in the deaf and dumb community. As a normal person is unaware of the grammar and meaning of the various gestures that are part of a sign language, its use is primarily limited to their families and the deaf and dumb community.

Keywords: Hand gesture, Sign language, Communication, OpenCV, ANN, CNN.

  1. INTRODUCTION

    Sign language is a mode of communication which uses visual means such as facial expressions, hand gestures, and body movements to convey meaning. It is extremely helpful for people who face difficulty with hearing or speaking. Sign language recognition refers to the conversion of these gestures into the words or alphabets of existing formally spoken languages. Thus, converting sign language into words by an algorithm or a model can help bridge the gap between people with hearing or speaking impairments and the rest of the world.

    Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural way of human interaction, it is an area in which many researchers are working, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. The primary goal of gesture recognition research is therefore to create systems which can identify specific human gestures and use them, for example, to convey information. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people.

    Hand gesture recognition for human-computer interaction is an area of active research in computer vision and machine learning. One of its primary goals is to create systems which can identify specific gestures and use them to convey information or to control a device. Gestures, however, need to be modelled in both the spatial and temporal domains, where a hand posture is the static structure of the hand and a gesture is the dynamic movement of the hand. There are basically two types of approaches for hand gesture recognition: vision-based approaches and data-glove approaches. The main focus of this work is on creating a vision-based system able to perform real-time sign language recognition. The reason for choosing a vision-based system is that it provides a simpler and more intuitive way of communication between a human and a computer. Since hand pose is one of the most important communication tools in humans' daily life, and with the continuous advances of image and video processing techniques, research on human-machine interaction through gesture recognition has led to the use of such technology in a very broad range of applications, such as touch screens, video game consoles, virtual reality, medical applications, and sign language recognition. Although sign language is the most natural way of exchanging information among deaf people, it has been observed that they face difficulties interacting with normal people. Sign language consists of a vocabulary of signs in exactly the same way as a spoken language consists of a vocabulary of words. Sign languages are not standard and universal, and their grammars differ from country to country.

  2. OBJECTIVES

    The Sign Language Recognition Prototype is a real-time vision-based system whose purpose is to recognize the American Sign Language alphabet given in Fig. 1. The purpose of the prototype was to test the validity of a vision-based system for sign language recognition and, at the same time, to test and select hand features that could be used with machine learning algorithms, allowing their application in any real-time sign language recognition system.

    The implemented solution uses only one camera and is based on a set of assumptions, defined as follows:

    1. The user must be within a defined perimeter area, in front of the camera.

    2. The user must be within a defined distance range, due to camera limitations.

    3. Hand poses are performed with a bare hand, not occluded by other objects.

    4. The system must be used indoors, since the selected camera does not work well under sunlight conditions.

      The proposed system architecture consists of two modules: (1) data acquisition, pre-processing and feature extraction, and (2) sign language gesture classification.

  3. LITERATURE SURVEY

    Most of the research done in this field has used glove-based systems. In a glove-based system, sensors such as potentiometers and accelerometers are attached to each of the fingers, and based on their readings the corresponding alphabet is displayed. Christopher Lee and Yangsheng Xu developed a glove-based gesture recognition system that was able to recognize 14 of the letters from the hand alphabet, learn new gestures, and update the model of each gesture in the system online. Over the years, advanced glove devices have been designed, such as the Sayre Glove, Dexterous Hand Master and Power Glove. The main problem faced by these glove-based systems is that they have to be recalibrated for every new user, and some require colour bands on the finger-tips so that the fingertips can be identified by the image processing unit. We are implementing our project using image processing. The main advantage of our project is that it is not restricted to a black background: it can be used with any background, and wearing colour bands is not required in our system. In this way, security systems could authorize an individual's identity depending on who she is, and not on what she has or what she can remember. Two main classes can be found in biometrics:

      • Physiological: associated with the body shape; includes all physical traits, such as the iris, palm print, facial features, fingerprints, etc.

      • Behavioral: related to the behavioral characteristics of a person. A characteristic widely used to this day is the signature. Modern methods of behavioral study are emerging, such as keystroke dynamics and voice analysis.

    Deaf Mute Communication Interpreter – A Review [1]: This paper aims to cover the various prevailing methods of deaf-mute communication interpreter systems. The two broad classifications of the communication methodologies used by deaf-mute people are Wearable Communication Devices and Online Learning Systems. Under the wearable communication method there are glove-based systems, the keypad method and the Handicom touch-screen. All three of these sub-divided methods make use of various sensors, an accelerometer, a suitable micro-controller, a text-to-speech conversion module, a keypad and a touch-screen. The need for an external device to interpret messages between deaf-mute and non-deaf-mute people can be overcome by the second method, i.e., the online learning system. The Online Learning System has five subdivided methods: the SLIM module, TESSA, Wi-See technology, the SWI_PELE system and Web-Sign technology.

    An Efficient Framework for Indian Sign Language Recognition Using Wavelet Transform [2]: The proposed ISLR system is considered as a pattern recognition technique with two important modules: feature extraction and classification. The joint use of Discrete Wavelet Transform (DWT) based feature extraction and a nearest-neighbour classifier is used to recognize the sign language. The experimental results show that the proposed hand gesture recognition system achieves a maximum classification accuracy of 99.23% when using a cosine distance classifier.

    Hand Gesture Recognition Using PCA [3]: In this paper, the authors presented a database-driven hand gesture recognition scheme based upon a skin colour model approach and a thresholding approach, along with an effective template matching, which can be used for human-robotics applications and other similar applications. Initially, the hand region is segmented by applying a skin colour model in the YCbCr colour space. In the next stage, thresholding is applied to separate the foreground from the background. Finally, a template-based matching technique is developed using Principal Component Analysis (PCA) for recognition.

    Hand Gesture Recognition System for the Dumb People [4]: The authors presented a static hand gesture recognition system using digital image processing. The SIFT algorithm is used to compute the hand gesture feature vector. The SIFT features are computed at the edges and are invariant to scaling, rotation and the addition of noise.

    An Automated System for Indian Sign Language Recognition [5]: In this paper, a method for automatic recognition of signs on the basis of shape-based features is presented. For segmentation of the hand region from the images, Otsu's thresholding algorithm is used, which chooses an optimal threshold to minimize the within-class variance of thresholded black and white pixels. Features of the segmented hand region are calculated using Hu's invariant moments, which are fed to an Artificial Neural Network for classification. Performance of the system is evaluated on the basis of accuracy, sensitivity and specificity.

    Hand Gesture Recognition for Sign Language Recognition: A Review [6]: The authors presented the various methods of hand gesture and sign language recognition proposed in the past by various researchers. For deaf and dumb people, sign language is the only way of communication; with its help, these physically impaired people express their emotions and thoughts to others.

    Design Issue and Proposed Implementation of Communication Aid for Deaf & Dumb People [7]: In this paper, the authors proposed a system to aid the communication of deaf and dumb people with normal people using Indian Sign Language (ISL), in which hand gestures are converted into an appropriate text message. The main objective is to design an algorithm to convert dynamic gestures to text in real time; finally, after testing, the system will be implemented on the Android platform and made available as an application for smartphones and tablet PCs.

    Real Time Detection and Recognition of Indian and American Sign Language Using SIFT [8]: The authors proposed a real-time vision-based system for hand gesture recognition for human-computer interaction in many applications. The system can recognize 35 different hand gestures of Indian and American Sign Language (ISL and ASL) at a fast rate with good accuracy. An RGB-to-grayscale segmentation technique was used to minimize the chances of false detection. The authors proposed an improvised Scale Invariant Feature Transform (SIFT), which was used to extract features. The system is modelled using MATLAB. To design an efficient and user-friendly hand gesture recognition system, a GUI model has been implemented.

    A Review on Feature Extraction for Indian and American Sign Language [9]: This paper presented the recent research and development of sign language recognition based on manual communication and body language. A sign language recognition system typically comprises three steps: preprocessing, feature extraction and classification. Classification methods used for recognition include Neural Networks (NN), Support Vector Machines (SVM), Hidden Markov Models (HMM), the Scale Invariant Feature Transform (SIFT), etc.

    SignPro – An Application Suite for Deaf and Dumb [10]: The authors presented an application that helps deaf and dumb persons communicate with the rest of the world using sign language. The key feature of this system is real-time gesture-to-text conversion. The processing steps include gesture extraction, gesture matching and conversion to speech. Gesture extraction involves the use of various image processing techniques, such as histogram matching, bounding box computation, skin colour segmentation and region growing. Techniques applicable for gesture matching include feature-point matching and correlation-based matching. Other features of the application include voicing out of text and text-to-gesture conversion.

    Offline Signature Verification Using SURF Feature Extraction and Neural Networks Approach [11]: In this paper, off-line signature recognition and verification using a neural network is proposed, where the signature is captured and presented to the user in an image format.

    Convolutional Neural Networks (CNN): A convolutional layer slides a set of learnable filters across the input image, computing the dot product between the filter entries and the input values at each position. As we continue this process, we create a 2-dimensional activation map that gives the response of that filter at every spatial position. Intuitively, the network will learn filters that activate when they see some type of visual feature, such as an edge of some orientation or a blotch of some colour.
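    As a minimal illustration of this sliding-filter process (our own worked example, not code from any of the surveyed papers), the following Python sketch computes an activation map by hand:

      import numpy as np

      # Slide a 3x3 filter over a grayscale image and record the dot product
      # at every position, producing a 2-dimensional activation map.
      def activation_map(image, kernel):
          h, w = image.shape
          kh, kw = kernel.shape
          out = np.zeros((h - kh + 1, w - kw + 1))
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  # dot product between the filter entries and the image patch
                  out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
          return out

      vertical_edge = np.array([[-1.0, 0.0, 1.0],   # activates on vertical edges
                                [-1.0, 0.0, 1.0],
                                [-1.0, 0.0, 1.0]])
      img = np.random.rand(8, 8)                    # stand-in grayscale patch
      print(activation_map(img, vertical_edge).shape)  # -> (6, 6)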

  4. METHODOLOGY

    The system uses a vision-based approach: all signs are represented with bare hands, which eliminates the need for any artificial devices for interaction.

    1. DATASET GENERATION

      It is necessary to build a proper database of sign language gestures so that the images captured while communicating using this system can be compared against it. The steps we followed to create our dataset are as follows. We used the Open Computer Vision (OpenCV) library to produce our dataset. We captured around 800 images of each symbol in ASL for training purposes and around 200 images per symbol for testing purposes. First, we capture each frame shown by the webcam of our machine. In each frame we define a region of interest (ROI), denoted by a blue bounded square. From the whole image we extract the ROI, which is RGB, and convert it into a grayscale image.

      Finally, we apply a Gaussian blur filter to the image, which helps us extract various features of the image.
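      A minimal sketch of this capture step is given below, using OpenCV's Python bindings; the save directory, ROI position and blur kernel size are illustrative choices of ours, not values fixed by the system:

      import os
      import cv2

      save_dir = "dataset/train/A"        # hypothetical folder for symbol 'A'
      os.makedirs(save_dir, exist_ok=True)

      cap = cv2.VideoCapture(0)           # default webcam
      count = 0
      while count < 800:                  # ~800 training images per symbol
          ok, frame = cap.read()
          if not ok:
              break
          # Region of interest: a fixed square in the frame (assumed position).
          x, y, size = 100, 100, 300
          cv2.rectangle(frame, (x, y), (x + size, y + size), (255, 0, 0), 2)
          roi = frame[y:y + size, x:x + size]
          gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)   # convert to grayscale
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)    # Gaussian blur filter
          cv2.imshow("capture", frame)
          cv2.imwrite(os.path.join(save_dir, f"{count}.png"), blurred)
          count += 1
          if cv2.waitKey(1) & 0xFF == 27:  # Esc to stop early
              break
      cap.release()
      cv2.destroyAllWindows()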

    2. GESTURE CLASSIFICATION

      Our approach uses two layers of algorithms to predict the final symbol shown by the user; illustrative sketches of both layers are given below.

      Algorithm Layer 1:

      1. Apply the Gaussian blur filter and threshold to the frame captured with OpenCV to obtain the processed image after feature extraction.

      2. This processed image is passed to the CNN model for prediction; if a letter is detected for more than 50 frames, the letter is printed and taken into consideration for forming the word.

      3. Spaces between words are produced using the blank symbol.
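        The following Python sketch illustrates Layer 1, assuming a trained Keras model file (here called cnn_model.h5) and a label set consisting of the ASL letters plus a blank symbol; the file names, ROI coordinates and threshold parameters are illustrative assumptions:

        import cv2
        import numpy as np
        from tensorflow.keras.models import load_model

        LABELS = [chr(c) for c in range(ord('A'), ord('Z') + 1)] + ["blank"]
        model = load_model("cnn_model.h5")       # hypothetical trained model

        def preprocess(roi):
            # Grayscale -> Gaussian blur -> threshold -> resize, as above.
            gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
            blurred = cv2.GaussianBlur(gray, (5, 5), 0)
            th = cv2.adaptiveThreshold(blurred, 255,
                                       cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, 11, 2)
            th = cv2.resize(th, (128, 128))
            return th.astype("float32")[None, :, :, None] / 255.0

        cap = cv2.VideoCapture(0)
        current, stable_for, sentence = None, 0, ""
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            roi = frame[100:400, 100:400]        # assumed ROI, as in capture
            probs = model.predict(preprocess(roi), verbose=0)
            letter = LABELS[int(np.argmax(probs))]
            # Count how many consecutive frames show the same letter.
            stable_for = stable_for + 1 if letter == current else 1
            current = letter
            if stable_for == 50:                 # letter held for 50 frames
                sentence += " " if letter == "blank" else letter
                print(sentence)
            if cv2.waitKey(1) & 0xFF == 27:
                break
        cap.release()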

        Algorithm Layer 2:

        1. We detect the various sets of symbols which show similar results on getting detected.

        2. We then classify between those sets using classifiers made for those sets only.
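        A sketch of Layer 2 follows, building on the Layer 1 sketch above. The paper does not list which symbol sets are confused with one another, so the groupings and sub-model file names below (e.g. D/R/U) are purely illustrative:

        import numpy as np

        # If the Layer-1 letter belongs to a set of easily confused symbols,
        # re-classify it with a classifier trained on that set only.
        CONFUSED_SETS = {
            frozenset("DRU"): "model_dru.h5",   # hypothetical sub-classifiers
            frozenset("TKI"): "model_tki.h5",
        }

        def refine(letter, processed_image, sub_models):
            """sub_models maps model file names to pre-loaded Keras models."""
            for group, model_file in CONFUSED_SETS.items():
                if letter in group:
                    sub_model = sub_models[model_file]
                    labels = sorted(group)   # assumes alphabetical output order
                    probs = sub_model.predict(processed_image, verbose=0)
                    return labels[int(np.argmax(probs))]
            return letter                    # unambiguous: keep Layer-1 result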

    3. TRAINING AND TESTING

      We convert our input images (RGB) into grayscale and apply Gaussian blur to remove unnecessary noise. We then apply an adaptive threshold to extract the hand from the background and resize our images to 128 x 128. After applying all the operations mentioned above, we feed the pre-processed input images to our model for training and testing; a sketch of this pipeline follows.
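      Below is a sketch of the pre-processing pipeline together with one possible CNN layout; the paper fixes only the 128 x 128 input size, so the network architecture and training settings are our assumptions:

      import cv2
      import numpy as np
      from tensorflow.keras import layers, models

      def preprocess_image(path):
          # Grayscale -> Gaussian blur -> adaptive threshold -> resize to
          # 128 x 128, mirroring the operations described above.
          img = cv2.imread(path)
          gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)
          th = cv2.adaptiveThreshold(blurred, 255,
                                     cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY_INV, 11, 2)
          return cv2.resize(th, (128, 128)).astype("float32") / 255.0

      def build_model(num_classes):
          # An assumed small CNN; only the input size comes from the paper.
          return models.Sequential([
              layers.Input(shape=(128, 128, 1)),
              layers.Conv2D(32, 3, activation="relu"),
              layers.MaxPooling2D(),
              layers.Conv2D(64, 3, activation="relu"),
              layers.MaxPooling2D(),
              layers.Flatten(),
              layers.Dense(128, activation="relu"),
              layers.Dense(num_classes, activation="softmax"),
          ])

      # Usage sketch, assuming x_train/x_test are stacks of pre-processed
      # images with a trailing channel axis and y_train/y_test are integer
      # labels:
      # model = build_model(27)   # e.g. 26 letters + blank
      # model.compile(optimizer="adam",
      #               loss="sparse_categorical_crossentropy",
      #               metrics=["accuracy"])
      # model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10)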

    4. CHALLENGES FACED

    We faced many challenges during the project. The very first issue was the dataset. We wanted to work with raw images, and square images in particular, since working with square inputs is more convenient for a CNN in Keras. We could not find any existing dataset that met these requirements, hence we decided to make our own. The second issue was selecting a filter to apply to our images so that proper features could be obtained and the image could then be provided as input to the CNN model. We tried various filters, including binary thresholding, Canny edge detection and Gaussian blur, but finally settled on the Gaussian blur filter. Further issues related to the accuracy of the model trained in earlier phases, which we eventually improved by increasing the input image size and by improving the dataset.

  5. CONCLUSION

    In this report, a functional real-time vision-based American Sign Language recognition system for deaf and dumb people has been developed for the ASL alphabet. We achieved a final accuracy of 92.0% on our dataset. We were able to improve our prediction by implementing two layers of algorithms, in which we verify and predict symbols that are more similar to each other. In this way we are able to detect almost all the symbols, provided that they are shown properly, there is no noise in the background and the lighting is adequate.

  6. REFERENCES

    1. Sunitha K. A., Anitha Saraswathi P., Aarthi M., Jayapriya K., Lingam Sunny, "Deaf Mute Communication Interpreter – A Review", International Journal of Applied Engineering Research, Volume 11, pp. 290-296, 2016.

    2. Mathavan Suresh Anand, Nagarajan Mohan Kumar, Angappan Kumaresan, "An Efficient Framework for Indian Sign Language Recognition Using Wavelet Transform", Circuits and Systems, Volume 7, pp. 1874-1883, 2016.

    3. Mandeep Kaur Ahuja, Amardeep Singh, "Hand Gesture Recognition Using PCA", International Journal of Computer Science Engineering and Technology (IJCSET), Volume 5, Issue 7, pp. 267-27, July 2015.

    4. Sagar P. More, Prof. Abdul Sattar, "Hand Gesture Recognition System for Dumb People", International Journal of Science and Research (IJSR).

    5. Chandandeep Kaur, Nivit Gill, "An Automated System for Indian Sign Language Recognition", International Journal of Advanced Research in Computer Science and Software Engineering.

    6. Pratibha Pandey, Vinay Jain, "Hand Gesture Recognition for Sign Language Recognition: A Review", International Journal of Science, Engineering and Technology Research (IJSETR), Volume 4, Issue 3, March 2015.

    7. Nakul Nagpal, Dr. Arun Mitra, Dr. Pankaj Agrawal, "Design Issue and Proposed Implementation of Communication Aid for Deaf & Dumb People", International Journal on Recent and Innovation Trends in Computing and Communication, Volume 3, Issue 5, pp. 147-149.

    8. S. Shirbhate, Vedant D. Shinde, Sanam A. Metkari, Pooja U. Borkar, Mayuri A. Khandge, "Sign Language Recognition System", IRJET, Volume 3, March 2020.
