Hand Gesture Recognition System to Control Soft Front Panels

DOI : 10.17577/IJERTV3IS120004


H. Renuka

Research scholar

Electronics and Instrumentation Engineering VNR Vignana Jyothi College of Engineering and Technology

Hyderabad, India

B. Goutam

Assistant Professor

Electronics and Instrumentation Engineering VNR Vignana Jyothi College of Engineering and Technology

Hyderabad, India

Abstract— Nowadays, computers and computerized devices have become an eminent element of human society. They increasingly influence many aspects of human life, for example the way we communicate and the way we perform actions. Thus a new concept of interaction has emerged: Human Computer Interaction (HCI). Although computers have made tremendous advancements, common HCI still relies on input devices such as the keyboard, mouse, and joystick. With the increasing presence of computers in daily life, it would be worthwhile to have a Perceptual User Interface (PUI) that lets humans interact with computers the way they interact with each other. Vision-based gesture recognition is an important technology for a friendly human computer interface and has received more and more attention in recent years.

The proposed system is a dynamic hand gesture recognition system in which hand movements are given as input to the computer, reducing the limitations of existing systems. This dynamic hand gesture recognition system is used to control media player operations such as play and pause. The LabVIEW software is used to design the dynamic hand gesture recognition system for controlling the media player.

Keywords— Hand Gesture Recognition System, Human Computer Interaction System, Soft Front Panels, Image Acquisition.

  1. INTRODUCTION

With the advent of computers and computerization, human machine interfaces have reached new levels and have brought with them a new range of consumer electronics. In general, all these user interfaces interact through a keyboard, mouse, remote or joystick. A static control panel limits the mobility of the user: a remote can be misplaced, dropped or broken. In a physical Human Machine Interface, the physical presence of the user is also needed at the site of operation. This project aims to replace this physical Human Machine Interface with a machine vision based application.

In this paper we discuss a low-cost dynamic hand gesture recognition technique to control various soft front panels, such as HMI systems, robotic systems, telecommunication systems and wheelchair applications for handicapped people, and demonstrate a live example of how to apply it to control a soft front panel. This work describes a hand gesture recognition technique built with the Virtual Instrumentation software LabVIEW and its Image Acquisition software. In Section 2 we describe the various areas in which hand gesture recognition systems can be embedded for better performance. In Section 3 we present the literature survey and define soft front panels. In Section 4 we describe the various blocks involved in the project and the basic block diagram. In Section 5 we describe the architecture and implementation of the project. In Section 6 we present the results, and Section 7 concludes the paper. The project has been implemented using LabVIEW software.

  2. APPLICATIONS OF HAND GESTURE RECOGNITION

A hand gesture recognition system recognizes the movements of the hand to implement certain commands or functions. Gestures are non-verbally exchanged information; a person can perform innumerable gestures at a time. Since human gestures are perceived through vision, they are a subject of great interest for computer vision researchers.

1. Telepresence:

In situations such as failures, emergencies, hostile conditions or inaccessible remote areas, it is often impossible for human operators to be physically present near the machines. Telepresence is the area of technical intelligence that aims to provide physical operation support by mapping the operator's arm to a robotic arm that carries out the necessary task. The prospects of telepresence include space and undersea missions, medicine, manufacturing, and the maintenance of nuclear power reactors.

    2. Bomb disposal:

During bomb disposal, the presence of humans can be replaced by robots that work on hand gesture recognition. This reduces the risk to human life and also encourages effective handling of the situation. The robot recognizes the gestures made by a human at a faraway remote place and implements the corresponding function.

    3. Wheelchair based applications:

People in wheelchairs often face problems with the manual techniques used to control the movements of the chair. Hand gestures can be adopted so that each gesture is assigned to a particular movement: when a particular gesture is given, the wheelchair moves in the corresponding direction.

Many applications of hand gesture recognition are already in use. The gaming industry has adopted many hand gesture recognition techniques in which the use of joysticks or keyboards is nowadays entirely unnecessary. Robotic surgeries are also conducted, in which doctors operate on patients from a remote location while robots perform the tasks.

  3. LITERATURE SURVEY

Christopher Lee and Yangsheng Xu [1] developed the cyber glove method of interfacing with a robot, which works by recognizing sign language based on hidden Markov models. Nishikawa and Ohnisi [2] developed a hand recognition system based on the rate of change of human gestures. Starner and Pentland [3] developed a glove-environment system capable of recognizing 40 sign language gestures at a rate of 5 kHz. Research states that by introducing gestures the performance of the system improved by 90% and the error due to various interferences was reduced.

In our project we have used hand gestures to control soft front panels. Conventional electronic instruments are hard-wired in a particular configuration and perform a specific task only. The front panel is an important part of an electronic instrument, as it provides the user interface through which the operator can easily interact with the instrument and control it. To control such traditional instruments, different knobs, buttons, dials, switches and displays are provided on the front panel. Due to advancements in overall technology, electronic instruments nowadays provide an improved virtual front panel for a standard stand-alone instrument connected to a computer.

Fig. 1: Block schematic of a soft front panel

Because of graphical user interfaces, the functionality of the front panel of the traditional instrument increases. Such panels are called soft front panels.

  4. PROPOSED SYSTEM

There are numerous methods for implementing a hand gesture recognition system. Two methods have been considered from a theoretical perspective.

1. The first is to build a three-dimensional model of the human hand. The model is matched to images of the hand captured by one or more cameras, and parameters corresponding to palm orientation and joint angles are estimated. These parameters are then used to perform gesture classification.

2. The second is to capture the image using a camera, extract some features, and use those features as input for classification and control.

In this project we have used the second method to model the system. For the hand gesture recognition system we have taken images from a standard hand gesture database; segmentation and filtering techniques are applied to the images in the pre-processing phase, and then, using a detection method, we obtain our prime feature and use it to classify a command. We have used linear classifiers. The basic block diagram of the hand gesture recognition system is as follows:

Fig. 2: Block diagram of the hand gesture recognition system

The main components are the power supply, a web camera and a personal computer or laptop. The power supply connection is given to the PC. The camera is connected to the PC through a USB cable. The camera captures images of hand movements continuously, and these are given as input to the PC. The images are taken as gestures to control the soft front panels. The system first creates a template as a reference. The reference template is matched against the camera-acquired images, and according to the number of patterns matched with the reference template, the corresponding operations are performed on the soft front panel.

Fig. 3: Various blocks of the hand gesture recognition system

The technique used to recognize hand gestures is based on computer vision. The overall system architecture is shown in the figure above. The whole hand gesture recognition system is divided into four phases: image acquisition, image pre-processing, feature extraction and hand gesture recognition.

Reading the video, frame extraction and pre-processing come under the first stage, the video acquisition module. The acquisition is initiated manually. A camera sensor, such as a web cam, is needed in order to capture the features. Local changes such as noise and digitization errors should not change the image scenes and their information. In order to satisfy the memory requirements and cope with the environmental scene conditions, pre-processing of the raw video content is highly important. Various factors like illumination, background, camera parameters and the viewpoint of the camera add complexity to the system, and these conditions can affect images dramatically. The first step of the pre-processing block is filtering, which is used to remove unwanted noise from the image scenes. Currently, vision-based analysis is used most widely; it deals with the way human beings perceive information about their surroundings. The database for these vision-based systems is created by selecting gestures with predefined meanings, and multiple samples of each gesture are considered to increase the accuracy of the system.

In this project, we have used the vision-based approach for our gesture recognition application. Several approaches have been proposed previously to recognize gestures using soft computing techniques such as artificial neural networks and genetic algorithms. ANNs are adaptive self-organizing technologies that have solved a broad range of problems, such as identification and control, game playing and decision making, pattern recognition, medical diagnosis, financial applications, and data mining, in an easy and convenient manner. In the second stage, a scaling- and orientation-invariant feature extraction method has been introduced to extract the features of the input image based on moment feature extraction. Finally, a tracking system is used to recognize the hand gestures.
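The moment-based, scale- and orientation-invariant feature extraction mentioned above can be illustrated with a short sketch. The original system is built graphically in LabVIEW, so the following Python/OpenCV fragment is only a hypothetical stand-in using Hu moment invariants; the function name and the log-scaling step are our own choices, not the authors' code.

    # Hypothetical sketch: scale/rotation-invariant moment features of a
    # binary hand image (Python/OpenCV stand-in for the LabVIEW pipeline).
    import cv2
    import numpy as np

    def hu_features(binary_hand):
        """Return the 7 Hu moment invariants of a binary hand image."""
        m = cv2.moments(binary_hand, binaryImage=True)
        hu = cv2.HuMoments(m).flatten()
        # Log-scale the invariants so they are numerically comparable.
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

Hu moments are invariant to translation, scale and in-plane rotation, which matches the requirement that each gesture be recognized at various scales, translations and rotations parallel to the image plane.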

      • Database description

In this project all operations are performed on color images. We have taken a hand gesture database consisting of 5 hand gestures; each gesture is produced as a static gesture. The system performs offline recognition, i.e. we give a test image as input to the system and the system tells us which gesture image we have given as input. The system is purely data dependent.

Fig. 4: Samples of images from the database

We take color images here to avoid the segmentation problem. A uniform background is placed behind the performer to cover all of the workspace. A low-cost 2-megapixel camera, producing 8-bit images, is used to capture the hand gestures performed by the performer. The resolution of a grabbed image is a minimum of 640×480. Each of the gestures/signs is performed in front of a constant background, and the user's fingernails are taken as the reference. Each gesture is performed at various scales and translations, and with a rotation in the plane parallel to the image plane. There are 200 images in total, 40 images per gesture.

5. ARCHITECTURAL IMPLEMENTATION

    The following is the architectural implementation of the project.

Fig. 5: Flow chart of the various blocks of the hand gesture recognition system

    1. Image acquisition:

According to the law of energy conservation, part of the radiant energy that arrives at an object is absorbed by it, part is refracted, and the rest is reflected as radiation. The process can be expressed as in equation (1):

Φi(λ) = Φa(λ) + Φt(λ) + Φr(λ)    (1)

where Φi represents the light incident on the object, Φa the energy absorbed by the material of the object, Φt the refracted flux and Φr the reflected energy; all of them define the material properties at a given wavelength λ. Using this principle, digital images of the hand gestures are captured using a web camera. We have used the color images directly in the project to avoid the segmentation problem. First the sample images are captured and stored; later, images are captured continuously so that they can be cross-referenced against the stored samples by the program.
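As an illustration of this acquisition step, here is a minimal sketch of grabbing and storing a sample frame, assuming a Python/OpenCV stand-in for the LabVIEW IMAQ acquisition; the device index and file name are placeholders.

    # Minimal acquisition sketch (Python/OpenCV stand-in; device index 0
    # and the output file name are assumptions).
    import cv2

    cap = cv2.VideoCapture(0)                     # open the USB web camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)        # 640x480, as in the paper
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    ok, frame = cap.read()                        # grab one color frame
    if ok:
        cv2.imwrite("reference_gesture.png", frame)  # store a sample image
    cap.release()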

    2. Pre-processing

We have used the Prima database, a standard database in gesture recognition. We have taken a total of 5 signs, each sign with 40 images. Pre-processing is applied to the images before we can extract features from the hand images. It consists of two steps:

a. Segmentation
b. Morphological filtering

Using segmentation, the gray scale image is converted into a binary image so that there are only two objects in the image: the hand and the background. After converting the gray scale image into a binary image, we make sure that there is no noise in the image, for which we use a morphological filtering technique.

    Morphological techniques consist of four operations: dilation, erosion, opening and closing.
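A minimal sketch of these two pre-processing steps, again as a hypothetical Python/OpenCV stand-in for the LabVIEW Vision operations (the Otsu threshold choice, the 5×5 kernel and the file name are our assumptions):

    # Pre-processing sketch: grayscale -> binary segmentation ->
    # morphological opening/closing to remove noise.
    import cv2
    import numpy as np

    frame = cv2.imread("reference_gesture.png")   # hypothetical sample image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Otsu's method picks the hand/background threshold automatically.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    kernel = np.ones((5, 5), np.uint8)
    # Opening (erosion then dilation) removes small speckle noise;
    # closing (dilation then erosion) fills small holes in the hand blob.
    clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)

Opening and closing combine the four basic operations (dilation, erosion, opening, closing) named above into the two composite filters most useful for binary hand masks.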

3. Feature extraction:

The extracted features must be carefully chosen; it is expected that the feature set will extract the relevant information from the input data in order to perform the desired task using this reduced representation instead of the full-size input. Feature extraction involves reducing the amount of resources required to describe a large set of data. Pattern matching and recognition form part of the feature extraction process.

The algorithm not only searches for the exact appearance of the pattern in the image but also tolerates a certain degree of variation with respect to the pattern. The simplest method is template matching, which is expressed in the following way:

Given an image A of size W×H and a pattern image P of size w×h, the result is an image M of size (W−w+1)×(H−h+1), where each pixel M(x, y) indicates the probability that the rectangle [x, y]–[x+w−1, y+h−1] of A contains the pattern.

The image M, defined by a difference function between the pattern and the segments of the image, is given by equation (2):

M(x, y) = Σ (x′ = 0..w−1) Σ (y′ = 0..h−1) [ P(x′, y′) − A(x + x′, y + y′) ]²    (2)

A continuous acquisition from the webcam with inline processing is used; therefore there is a while loop around the blocks of the video acquisition and pattern recognition system. One methodology to change RGB values to intensity values is the IMAQ Extract Single Color Plane function, located in Vision Utilities » Color Utilities. Once the image is converted into intensity, this image becomes the input of the pattern recognition function.

Each case, from one matched pattern up to five, can be assigned a different function. When the number of matches corresponds to the image file path chosen, the corresponding command or function is executed, as sketched below.
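The squared-difference matching of equation (2) and the counting of matched patterns can be sketched as follows, as a hypothetical Python/OpenCV stand-in for the LabVIEW pattern matching step; the file names, the 0.05 acceptance threshold and the suppression window are our assumptions.

    # Template-matching sketch corresponding to equation (2): squared
    # differences, then counting how many locations match the template.
    import cv2

    scene = cv2.imread("hand_frame.png", cv2.IMREAD_GRAYSCALE)        # image A (W x H)
    pattern = cv2.imread("finger_template.png", cv2.IMREAD_GRAYSCALE) # pattern P (w x h)

    # M has size (W-w+1) x (H-h+1); low values mean a good match for SQDIFF.
    M = cv2.matchTemplate(scene, pattern, cv2.TM_SQDIFF_NORMED)

    matches = 0
    m = M.copy()
    while True:
        min_val, _, min_loc, _ = cv2.minMaxLoc(m)
        if min_val > 0.05:        # no sufficiently good match left
            break
        matches += 1
        # Suppress the neighbourhood of the accepted match so each
        # finger is counted only once.
        x, y = min_loc
        h, w = pattern.shape
        m[max(0, y - h):y + h, max(0, x - w):x + w] = 1.0

    print("number of patterns matched:", matches)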

  6. RESULTS

We have used the following technique to demonstrate the hand gesture recognition system controlling the soft front panel of a Windows media player. The number of patterns matched is displayed on the front panel.

Fig. 6: Selection of pattern matching

1. Interfacing with the block diagram

After processing the image, the template of the processed image is interfaced to the block diagram through ActiveX controls. A file path is created and given to the program so that ActiveX activates the path in which the image file is stored. The template file path is used as a control; it gives the path of the pattern matching template used in Vision Assistant. By giving the same address in the selected file path, the image can be extracted and sent for further processing into the case structure. In the case structure, each match that is found in the chosen path is imprinted with a command; whenever that particular match pops up, the command assigned to it is implemented. The case structure used in our project implements the commands listed in Table II.

No. of patterns matched            Operation
1                                  Play
2                                  Pause
3                                  Backward
4                                  Forward
5                                  Stop
1 (moving in forward direction)    Volume increase
1 (moving in backward direction)   Volume decrease

Table II: Controlling operations of the soft front panel

The number of matches is one, so according to Table II the media player file will play, which is programmed as the default operation in the code.

A case structure is designed in which each finger count is given control of a feature in Windows Media Player: one finger denotes that the video has to start playing, 2 fingers denote that the video has to pause, 3 fingers denote that the video has to move backward, 4 fingers denote that the video has to move forward, and 5 fingers denote that the video has to stop; one finger moving right denotes an increase in volume and moving left a decrease. This case structure is interfaced with the pattern recognition phase, and each command is obeyed only when the program recognizes and matches the gesture made with the control that has to be performed on the video. The data was captured with the help of 2 different people, and five video samples were collected for each person. We used 20% of our data to train the system, 30% to validate it and 50% to test it.
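The LabVIEW case structure drives Windows Media Player through ActiveX. As a rough illustration only, a Python sketch of the same idea using the WMPlayer.OCX ActiveX control through pywin32 might look as follows; the file path and the 10-second skip for forward/backward are our assumptions, not the paper's values.

    # Sketch of mapping the match count to media-player commands via
    # ActiveX/COM, mirroring Table II (Windows only; path is a placeholder).
    import win32com.client

    wmp = win32com.client.Dispatch("WMPlayer.OCX")  # Windows Media Player ActiveX control
    wmp.URL = r"C:\videos\demo.wmv"                 # hypothetical video file

    def execute(matches):
        if matches == 1:
            wmp.controls.play()
        elif matches == 2:
            wmp.controls.pause()
        elif matches == 3:
            wmp.controls.currentPosition -= 10      # "backward": jump 10 s back
        elif matches == 4:
            wmp.controls.currentPosition += 10      # "forward": jump 10 s ahead
        elif matches == 5:
            wmp.controls.stop()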

Fig. 7: Soft front panel 1

The number of matches is two, so according to Table II the media player pauses for a short time, while the number of patterns matched remains two.

Fig. 8: Soft front panel 2

The number of patterns matched is three, so the corresponding controlling operation is backward, where the media player file is moved some tracks behind the current position.

Fig. 9: Soft front panel 3

The number of patterns matched is four, so the corresponding controlling operation is forward, where the media player file is moved some tracks ahead of the current position.

Fig. 10: Soft front panel 4

The volume controlling operation involves two things: the number of patterns matched and the movement of the matched pattern. The following is the front panel block diagram of the project.
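A minimal sketch of this volume rule, assuming the horizontal position of the single matched pattern is tracked between frames; the dead band and volume step are illustrative values, not from the paper.

    # Volume rule sketch: one pattern matched plus the direction in which
    # the matched pattern moves between frames (values are assumptions).
    def volume_command(prev_x, curr_x, volume, step=5, dead_band=5):
        """Return the new volume given consecutive x-positions of the match."""
        if curr_x - prev_x > dead_band:        # pattern moving forward/right
            volume = min(100, volume + step)   # volume increase
        elif prev_x - curr_x > dead_band:      # pattern moving backward/left
            volume = max(0, volume - step)     # volume decrease
        return volume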

  7. CONCLUSION

Regarding the computer vision algorithms, there is ongoing work to increase the speed and performance of the system, to acquire more position independence in the recognition of gestures, to increase the tolerance to varying lighting conditions, and to increase recognition performance with complex backgrounds. The main effort, however, is currently aimed at the design and organization of menus.

The area of hand gesture based human computer interaction is very vast. This project recognizes hand gestures offline; a hand recognition system can be useful in many fields, like robotics and human computer interaction, so making this offline system work in real time is future work. The Support Vector Machine can be modified to reduce complexity; reduced complexity leads to less computation time, which would allow the system to work in real time.

As technology advances day by day, hand gesture systems may be replaced by voice commands, which use only microphones to control the operations.

REFERENCES

1. Henrik Birk and Thomas Baltzer Moeslund, "Recognizing Gestures From the Hand Alphabet Using Principal Component Analysis," Master's Thesis, Laboratory of Image Analysis, Aalborg University, Denmark, 1996.

2. Andrew Wilson and Aaron Bobick, "Learning Visual Behavior for Gesture Analysis," Proceedings of the IEEE Symposium on Computer Vision, Coral Gables, Florida, pp. 19-21, November 1995.

3. Thad Starner and Alex Pentland, "Real-Time American Sign Language Recognition from Video Using Hidden Markov Models," Technical Report No. 375, M.I.T. Media Laboratory Perceptual Computing Section, 1995.

4. Jennifer Schlenzig, Edward Hunter, and Ramesh Jain, "Recursive Spatio-Temporal Analysis: Understanding Gestures," Technical Report, Visual Computing Laboratory, University of California, San Diego, 1995.

5. Arun Katkere, Edward Hunter, Don Kuramura, Jennifer Schlenzig, Saied Moezzi, and Ramesh Jain, "ROBOGEST: Telepresence Using Hand Gestures," Technical Report No. VCL-94-104, Visual Computing Laboratory, University of California, San Diego, December 1994.

6. Hank Grant and Chuen-Ki Lai, "Simulation Modeling with Artificial Reality Technology (SMART): An Integration of Virtual Reality and Simulation Modeling," Proceedings of the Winter Simulation Conference, 1998.

7. Theodore Brun, Teckenspråks Lexikon, Bokförlaget Spektra AB, Halmstad, 1974. http://www.happinesspages.com/baby-sign-language-FAQ.html

8. Christopher Lee and Yangsheng Xu, "Online, Interactive Learning of Gestures for Human/Robot Interfaces," The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, 1996.
