Vision Based Hand Gesture Recognition for Indian Sign Language


Engineering Research Journal, June 2019


Ravikiran P

Dept. of Electronics and Communication Engineering, Vidyavardhaka College of Engineering,

Mysuru-02, India.

Reehan Ahmed

Dept. of Electronics and Communication Engineering, Vidyavardhaka College of Engineering,

Mysuru-02, India.

Jagadeesh B

Assistant Professor,

Dept. of Electronics and Communication Engineering, Vidyavardhaka College of Engineering,

Mysuru-02, India.

Abstract The task of gesture recognition is highly challenging due to complex backgrounds, the presence of non-gesture hand motions, and varying illumination. Gesture recognition is an active research area in computer vision. Body language is one of the important ways humans communicate, so a gesture recognition system is a natural approach to improving human-machine interaction: such interfaces allow a human to control a wide variety of devices remotely through hand gestures. The proposed method is a step towards a system with low complexity and high accuracy. This paper introduces a hand gesture recognition system that recognizes dynamic gestures, each performed singly against a complex background. Unlike previous gesture recognition systems, the proposed system uses neither an instrumented glove nor any markers. The proposed bare-handed technique uses only 2D video input; the motion information obtained is then used in the recognition phase.

Keywords: Human Computer Interaction, Hand Gesture Recognition, Principal Component Analysis, Linear Discriminant Analysis.

  1. INTRODUCTION

    To facilitate efficient human-computer interaction, many special devices have been used as interfaces between human and computer. Even so, gestures remain a powerful means of communication among humans, and many devices have been developed so that computer vision systems can understand them. Such devices have become familiar, but they still limit the speed and naturalness with which users can communicate with computers, a limitation that has become more serious with the evolution of technologies such as virtual reality. Many efforts have been made to detect and recognize faces, palms, emotional expressions, and hand gestures. Recognition systems play a very important role in applications such as telemedicine,

    Sushmitha N S

    Dept. of Electronics and Communication Engineering, Vidyavardhaka College of Engineering,

    Mysuru-02, India.

    Rathan Kumar V

    Dept. of Electronics and Communication Engineering, Vidyavardhaka College of Engineering,

    Mysuru-02, India.

    Sahana M S

    Assistant Professor,

    Dept. of Electronics and Communication Engineering, Vidyavardhaka College of Engineering,

    Mysuru-02, India.

    biometrics, and advanced interfaces for Human-Computer Interaction.

    In particular, a gesture is a form of communicative expression used to convey information among people. Gestures range from simple ones, such as pointing at an object, to complex ways of using the hands to express feelings. Extracting the meaning of gestures for use in Human-Computer Interaction is, however, a significant challenge: it requires means by which the recognition process becomes easy and efficient at understanding the intended gestures. Gesture recognition requires feature extraction, on the basis of which a classifier assigns each gesture accurately to its respective class.

    Gesture recognition is an active research area in computer vision. Body language is one of the important ways humans communicate, so a gesture recognition system is a natural approach to improving human-machine interaction. Such human-machine interfaces allow a human to control a wide variety of devices remotely through hand gestures. The method proposed in this paper is a step towards a system with low complexity and high accuracy.

    Automatic gesture recognition has been an active research area in the last decade; progress can be found in review papers and in the proceedings of the last four international conferences on gesture recognition. Among the various approaches, techniques based on Principal Component Analysis (PCA), popularly called eigengestures, have played a fundamental role in dimensionality reduction and demonstrated excellent performance. PCA-based approaches typically include two phases: training and classification (recognition). In the training phase, an eigenspace is established from the training samples using PCA, and the training gesture images are mapped onto it. In the classification phase, the input gesture image is projected onto the same eigenspace and classified by an appropriate method. Many different methods have been used for this step, such as Euclidean distance, Bayesian classifiers, and Linear Discriminant Analysis (LDA). Unlike PCA, which encodes information in an orthogonal linear space, LDA encodes discriminatory information in a linearly separable space whose bases are not necessarily orthogonal. Researchers have demonstrated that LDA-based algorithms outperform PCA for many different tasks.

    However, the standard LDA algorithm has difficulty processing high-dimensional image data. PCA is therefore often used to project an image into a lower-dimensional space, the so-called gesture space, after which LDA is performed to maximize the discriminatory power. In such approaches PCA plays the role of dimensionality reduction and forms a PCA subspace, but relevant information may be lost through an inappropriate choice of dimensionality in the PCA step. LDA itself can be used not only for classification but also for dimensionality reduction; for example, it has been widely used for dimensionality reduction in speech recognition. Since LDA offers many advantages in other pattern recognition tasks, we would like to exploit these properties for gesture recognition as well.

  2. OVERVIEW OF THE GESTURE RECOGNITION SCHEME

    A low-cost computer vision system for gesture recognition that can run on a common PC equipped with a USB web cam is one of the main objectives of the proposed work.

    Figure 1: Proposed System Overview of the Hand Gesture Recognition Scheme.

    The system should be able to work under different degrees of scene-background complexity and illumination conditions. The real-time image is captured through the web cam, while the training images are taken from the Marcel database. Both the real-time image and the training images are pre-processed by skin detection. The PCA and LDA algorithms are then applied to the training images to compress and analyze them, and KNN classification is used to match the real-time image against the correct training image. Figure 1 shows the block diagram of the proposed gesture recognition system.
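The skin-detection pre-processing step can be sketched as a simple color-space threshold. The following NumPy snippet is a minimal illustration only, assuming the commonly used Cb/Cr ranges (77-127 and 133-173); the paper does not specify the thresholds or color space actually used, so the function `skin_mask` and its constants are hypothetical:

```python
import numpy as np

def skin_mask(rgb):
    """Crude skin segmentation by thresholding in YCbCr space.

    rgb: (H, W, 3) uint8 image. Returns a boolean mask that is True
    where a pixel falls inside the assumed skin-tone Cb/Cr ranges.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # Standard ITU-R BT.601 RGB -> Cb/Cr conversion.
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

The resulting mask can then be used to crop or blank out non-skin regions before feature extraction.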

  3. PROPOSED METHODOLOGY

    1. PCA Algorithm:

      Principal Component Analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components [20] [21]. Figure 2 represents the PCA algorithm.

      Figure 2: Block diagram of PCA algorithm
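As a sketch of the training phase described above, the snippet below builds an eigenspace from flattened training images and projects images onto it. The function names are illustrative, and computing PCA via an SVD of the centred data is one standard implementation, not necessarily the one the authors used:

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA on training images flattened to row vectors.

    X: (n_samples, n_pixels) matrix of training images.
    Returns the mean image and the top principal components
    (the "eigengestures").
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data: rows of Vt are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def pca_project(X, mean, components):
    """Map images onto the eigenspace (dimensionality reduction)."""
    return (X - mean) @ components.T
```

At recognition time the input image is passed through `pca_project` with the same mean and components so that it lands in the same eigenspace as the training set.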

    2. LDA Algorithm:

      Linear Discriminant Analysis (LDA) and the related Fisher's linear discriminant are methods used in pattern recognition [22], statistics, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events [23]. The resulting combination can be used as a linear classifier or, more commonly, for dimensionality reduction. Figure 3 shows the block diagram of the LDA algorithm.
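A minimal two-class Fisher discriminant can be written in a few lines of NumPy; this is a sketch of the idea (maximizing between-class scatter relative to within-class scatter), not the authors' multi-class implementation, and the function name is hypothetical:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant direction.

    X0, X1: (n_i, d) samples of each class. Returns the unit vector w
    that maximizes between-class scatter over within-class scatter,
    i.e. w proportional to Sw^{-1} (m1 - m0).
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of the two class scatter matrices.
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting samples onto `w` (`X @ w`) yields a one-dimensional representation in which the two classes are maximally separated, which is exactly the dimensionality-reduction use of LDA mentioned above.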

    3. KNN Classification:

      The k-nearest neighbor algorithm classifies an object into the class to which the majority of its neighbors belong. The choice of the number of neighbors k is discretionary and left to the user; if k is 1, the object is assigned the class of its single nearest neighbor [24].

      Typically, the object is classified by a majority vote over the labels of its k nearest neighbors. When only two classes are present, k should be an odd integer to avoid ties; in multiclass classification, however, ties can still occur even with odd k. After converting each image to a fixed-length vector of real numbers, the Euclidean distance between vectors is calculated. Figure 4 represents KNN classification.

      Figure 3: Block diagram for LDA algorithm

      Figure 4: KNN Classification
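The majority-vote rule over Euclidean distances described above can be sketched as follows; the function name is illustrative, and tie-breaking here is arbitrary (Python's `max` picks one of the tied classes), which mirrors the ambiguity noted for multiclass ties:

```python
import numpy as np

def knn_classify(train_vecs, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    vectors under Euclidean distance.

    train_vecs: (n, d) array of training feature vectors.
    train_labels: list of n class labels.
    """
    dists = np.linalg.norm(train_vecs - query, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of k closest samples
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)  # majority label
```

In the proposed pipeline, `train_vecs` would hold the PCA/LDA projections of the training images and `query` the projection of the real-time image.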

      4. RESULTS

      The system has been trained for three gestures, namely Concept, Exchange, and No, shown in Figure 5, Figure 6, and Figure 7.

      1. Gesture Meaning : CONCEPT

        Figure 5: Gesture Meaning -CONCEPT

      2. Gesture Meaning : EXCHANGE

        Figure 6: Gesture Meaning -EXCHANGE

      3. Gesture Meaning : NO

Figure 7: Gesture Meaning -NO

  5. APPLICATIONS

    Hand gesture recognition has been applied in different domains with different applications.

      • Hand gesture controlled robots for the physically challenged.

      • Hand gesture controlled doors and vehicles.

      • Hand gesture controlled keyboard and mouse to interact with computer.

      • Gesture controlled appliances such as air conditioners.

      • Sign Language Recognition: enabling deaf and mute people to communicate through sign language.

      • Robot Control: controlling a robot using gestures; for example, one means move forward, five means stop, and so on.

      • Television Control: controlling the volume, changing channels, etc. using gesture recognition.

      • 3D Modeling: building 3D models by showing their shapes through hand gestures.

  6. ADVANTAGES

    1. The system successfully recognizes static and dynamic gestures and could be applied to mobile robot control.

    2. It is simple, fast, and easy to implement, and can be applied to real systems and games.

    3. It is fast and sufficiently reliable for a recognition system, with good performance against complex backgrounds.

    4. No training of the human user is required.

  7. CONCLUSION AND FUTURE WORK

This project developed a system that recognizes a real-time image based on features extracted from the training database using the Principal Component Analysis and Linear Discriminant Analysis algorithms, with classification by K-Nearest Neighbor. A few factors still cause poor performance on test data.

In future, gesture recognition can be extended to full sentences, leading to better communication between hearing people and deaf and mute people.

REFERENCES

  1. J. J. Stephan and S. A. Khudayer, "Gesture Recognition for Human-Computer Interaction," International Journal of Advancements in Computing Technology, vol. 2, 4 November 2010.

  2. O. M. Foong, T. J. Low and S. Wibowo, "Hand Gesture Recognition: Sign to Voice System (S2V)," 2008

  3. P. Chakraborty, P. Sarawgi, A. Mehro, G. Agarwal and R. Pradhan, "Hand Gesture Recognition: A Comparative Study," in International MultiConference of Engineers and Computer Scientists, Hong Kong, 2008.

  4. T. H. H. Maung, "Real-Time Hand Tracking and Gesture Recognition System Using Neural Networks," 2009.

  5. A. Chaudhary, J. L. Raheja, K. Das and Sonia, "Intelligent Approaches to interact with Machines using Hand Gesture Recognition in Natural way: A Survey," International Journal of Computer Science & Engineering Survey (IJCSES), vol. 2, 2011.

  6. K. Symeonidis, "Hand Gesture Recognition Using Neural Networks"

  7. R. Lockton, A.W. Fitzgibbon, Real-time gesture recognition using deterministic boosting, Proceedings of British Machine Vision Conference (2002).

  8. G. R. S. Murthy, R. S. Jadon. (2009). A Review of Vision Based Hand Gestures Recognition, International Journal of Information Technology and Knowledge Management, vol. 2(2), pp. 405-410.

  9. P. Garg, N. Aggarwal and S. Sofat. (2009). Vision Based Hand Gesture Recognition, World Academy of Science, Engineering and Technology, Vol. 49, pp. 972-977.

  10. Fakhreddine Karray, Milad Alemzadeh, Jamil Abou Saleh, Mo Nours Arab, (2008). "Human-Computer Interaction: Overview on State of the Art," International Journal on Smart Sensing and Intelligent Systems, Vol. 1(1).

  11. Garima Khurana, Garima Joshi, Jatinder Pal Kaur, "Static Hand Gesture Recognition System Using Shape Based Features," 2014.

  12. Mokhtar M. Hasan, Pramoud K. Misra, (2011). "Brightness Factor Matching For Gesture Recognition System Using Scaled Normalization," International Journal of Computer Science & Information Technology (IJCSIT), Vol. 3(2).

  13. Luigi Lamberti, Francesco Camastra, (2011). Real-Time Hand Gesture Recognition Using a Color Glove, Springer Proceedings of the 16th international conference on Image analysis and processing: Part I ICIAP.

  14. Jong-Shill Lee, Young-Joo Lee, Eung-Hyuk Lee, Seung-Hong Hong, "Hand Region Extraction and Gesture Recognition from Video Stream with Complex Background through Entropy Analysis," Proceedings of the 26th Annual International Conference of the IEEE EMBS, San Francisco, CA, USA, September 1-5, 2004.

  15. P. Garg, N. Aggarwal and S. Sofat. (2009). Vision Based Hand Gesture Recognition, World Academy of Science, Engineering and Technology, Vol. 49, pp. 972-977.

  16. Xingyan Li. (2003). "Gesture Recognition Based on Fuzzy C-Means Clustering Algorithm," Department of Computer Science, The University of Tennessee, Knoxville.

  17. Rayi Yanu Tara, Paulus Insap Santosa, Teguh Bharata Adji, "Hand Segmentation from Depth Image using Anthropometric Approach in Natural Interface Development," International Journal of Scientific & Engineering Research, Volume 3, Issue 5, May 2012.

  18. Hervé Lahamy and Derek D. Lichti, "Towards Real-Time and Rotation-Invariant American Sign Language Alphabet Recognition Using a Range Camera," 29 October 2012.

  19. K. Kosmelj, J. Le-Rademacher and Lynne, "Symbolic Covariance Matrix for Interval-valued Variables and its Application to Principal Component Analysis: a Case Study," in Metodoloskizvezki, 2014.

  20. "A tutorial on PCA," [Online]. Available: http://www.sccg.sk/~haladova/principal_components .pdf

  21. "Fisher Linear Discriminant Analysis," [Online]. Available: http://www.ics.uci.edu/~welling/classnotes/papers_cl ass/Fisher- LDA.pdf.

  22. Wikipedia, "Linear discriminant analysis," [Online]. Available: http://en.wikipedia.org/w/index.php?title=Linear_dis criminant_analysis&oldid=153494000.

  23. J. Kim, B.-S. Kim and S. Savarese, "Comparing Image Classification Methods: K-Nearest-Neighbor and Support-Vector-Machines".

  24. P. K. K. Vyas, A. Pareek and S. Tiwari, "Gesture Recognition and Control Part 2 Hand Gesture Recognition (HGR) System & Latest Upcoming Techniques," International Journal on Recent and Innovation Trends in Computing and Communication, vol. 1, no. 8.

  25. Chen-Chiung Hsieh and David Lee, "A Real Time Hand Gesture Recognition System by Adaptive Skin-Color Detection and Motion History Image," Dept. of CSE, Tatung University, Taipei, Taiwan, ICSPS, Dalian, 2010.
