- Open Access
- Total Downloads : 573
- Authors : Prof. Shabnam S. Shaikh, Sandeep L. Dhebe, Pallavi D. Zambare, Abhishek D. Jivanwal, Pratiksha P. Luniya
- Paper ID : IJERTV3IS20313
- Volume & Issue : Volume 03, Issue 02 (February 2014)
- Published (First Online): 18-02-2014
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Human Computer Interaction (Robot Handling) using Hand Gesture Recognition
Prof. Shabnam S. Shaikh 1, Sandeep L. Dhebe 2, Pallavi D. Zambare 3, Abhishek D. Jivanwal 4, Pratiksha P. Luniya 5
1,2,3,4,5 Department of Computer Engineering, AISSMS College of Engineering, Pune, India.
Abstract: Input of information is becoming a challenging task as portable electronic devices grow smaller and smaller. Alternatives to the keyboard and mouse are necessary on hand-held computers. The next logical step is to remove the need for any dedicated or merged input space, as well as the need for any additional input device (stylus, mouse, etc.), allowing data to be input simply by executing bare-handed gestures in front of a web camera. This technique lets people control such products more naturally, intuitively and conveniently. In this paper, a fast gesture recognition scheme is proposed as an interface for the human-machine interaction (HMI) of such systems. The paper presents low-complexity algorithms and gestures that reduce recognition complexity and are better suited to controlling real-time computer systems. The primary goal is to create a system that can identify human-generated gestures and use this information for device control.
Keywords: Gesture Recognition, Human-Machine Interaction (HMI), Real-Time Computer Systems, Device Control, Hand-Held Computers, Web Camera, Low-Complexity Algorithms.
I. INTRODUCTION
Recently, strong efforts have been made to develop intelligent, natural interfaces between users and computer systems based on human gestures. Gestures provide an intuitive interface to both human and computer. Gesture is a natural form of communication that enhances the quality of communication. Such gesture-based interfaces can therefore not only substitute for common interface devices but also extend their functionality. Gestures are expressive, meaningful body motions involving physical movement of the fingers, hands, arms, head, face or body, with the intent of conveying information or interacting with the environment.
The aim of this paper is to bridge the gap between the digital and the physical world, bringing intangible digital information out into the tangible world and allowing us to interact with it via natural hand gestures. Such a system frees information from its confines by seamlessly integrating it with reality, in effect making the entire world your computer. This paper aims at implementing real-time gesture recognition; its primary goal is to create a system that can identify human-generated gestures and use this information for software and hardware control.
A recent trend in this technology is service robots. Service robots assist the common man in day-to-day tasks and can be used almost anywhere. If a service robot is to be handled by a non-expert user, the instructions must be made as simple as possible, which is easily achieved using gestures.
The system provides a more natural human-computer interface. It gives users who do not have full knowledge of the system, as well as physically challenged users, a better way to interact with computers: input can be given without a keyboard or mouse, in the form of different body gestures. The system determines the directional gesture vector; both the direction and the finger count drive the hardware control.
II. RELATED WORKS
Most gesture recognition methods contain three major stages [5] (see Fig. 1). The first stage is object detection; its target is to detect hand objects. Many environment and image problems must be solved at this stage to ensure that hand contours or regions can be extracted precisely enough to sustain recognition accuracy. Common image problems include unstable brightness, noise, and poor resolution and contrast. Better environments and camera devices can mitigate these problems, but they are hard to control once the gesture recognition system is working in a real environment or has become a product. Hence, image processing is the better way to handle these problems and construct an adaptive, robust gesture recognition system. The second stage is object recognition: the detected hand objects are classified to identify the gestures. Here, the selection of discriminative features and effective classifiers is the major issue in most research. The third stage analyses sequential gestures to identify the user's instructions or behaviours.
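None of the surveyed papers gives code for this pipeline, but as a minimal sketch the three-stage structure could be organized as follows; all function bodies are placeholders, not taken from any cited work.

```python
# Skeleton of the generic three-stage gesture recognition pipeline
# described above. Real systems substitute their own detector,
# classifier and sequence analyser for the placeholder bodies.

def detect_hand(frame):
    """Stage 1: segment the hand region from a raw camera frame,
    coping with brightness changes, noise and background clutter."""
    ...

def recognize_gesture(hand_region):
    """Stage 2: extract features from the detected hand and map
    them to one of the known static gestures."""
    ...

def analyse_sequence(gesture_history):
    """Stage 3: interpret a sequence of recognized gestures as a
    user instruction (e.g. Start -> Move -> Stop)."""
    ...

def process(frames):
    history = []
    for frame in frames:
        hand = detect_hand(frame)
        if hand is not None:
            history.append(recognize_gesture(hand))
    return analyse_sequence(history)
```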
Early approaches to the hand gesture recognition problem in a robot-control context used markers on the fingertips. An associated algorithm detects the presence and colour of the markers, through which one can identify which fingers are active in the gesture.
Ching-Hao Lai [1] proposed a fast gesture recognition scheme for in-vehicle computers. To guarantee good performance of the proposed scheme, the paper assumes some environment conditions:
- the proposed scheme is used to construct a gesture recognition system for an indoor or in-vehicle environment;
- the camera captures hand video from a looking-down view;
- the background in each hand image is not very complex and is darker than the hands.
Based on these predefined environment conditions, the computing complexity and recognition accuracy of the gesture recognition scheme can be effectively improved, and the scheme can be used to construct a smoother human-based user interface (UI) for vehicle systems and other consumer electronic products.
Fig.1. Gesture Recognition System
Kevin R. Wheeler [9] introduced an approach to designing and using neuroelectric interfaces for controlling virtual devices. Hand gestures are used to interface with a computer instead of manipulating mechanical devices such as joysticks and keyboards. Electromyographic (EMG) signals are noninvasively sensed from the muscles used to perform these gestures, then interpreted and translated into useful computer commands.
Schuldt et al. [10] used an SVM to classify local motion descriptions of hands. Scovanner et al. proposed a method that first detects corner points and then analyses the features extracted from the neighbouring pixels of these corner points. These features usually describe changes of brightness in grey-scale space and are likewise recognized with an SVM.
Vision-based automatic hand gesture recognition has been a very active research topic in recent years, with motivating applications such as human-computer interaction (HCI), robot control and sign-language interpretation. The general problem is quite challenging due to a number of issues, including the complicated nature of static and dynamic hand gestures, complex backgrounds, and occlusions. Attacking the problem in its generality requires elaborate algorithms and intensive computational resources. What motivates this work is a robot navigation problem, in which we are interested in controlling a robot by hand pose signs given by a human. Due to real-time operational requirements, we are interested in a computationally efficient algorithm, and we demonstrate the effectiveness of the technique on real imagery.
III. WORKING OF PROPOSED SYSTEM
A. Process of Gesture Recognition
Fig.2. Application interface and skin-colour learning square
Fig.3. Block diagram of HMI system
Fig.4. Block diagram of hand gesture recognition
Fig.5. Gesture alphabet and valid gesture transitions
The hand must be located in the image and segmented from the background before recognition. Colour is the selected cue because of its computational simplicity, its invariance with respect to hand shape configurations, and the characteristic values of human skin colour. The assumption that colour can be used as a cue to detect faces and hands has also proved useful in several publications. For our application, the hand segmentation is carried out using a low-computational-cost method that performs well in real time. The method is based on a probabilistic model of the distribution of skin-colour pixels.

It is first necessary to model the skin colour of the user's hand. The user places part of his hand in a learning square, as shown in Fig. 2; the pixels inside this area are used for model learning. Next, the selected pixels are transformed from RGB space to HSV space and the chroma information, hue and saturation, is taken. The image is then blurred to obtain a consistent image of the selected pixels, i.e. skin pixels remain visible only where we want them, on the palm side; other areas where stray pixels are present are made invisible, so unwanted pixels are removed. After that, a trigonometric circular scan is performed so that the gesture can be properly recognized.
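The paper does not give code for the learning step; the following is a minimal OpenCV sketch of the described sequence, assuming a BGR webcam frame. The learning-square position and size (SQ_X, SQ_Y, SQ_SIZE) are illustrative assumptions; the actual square is placed by the application interface (Fig. 2).

```python
import cv2
import numpy as np

# Assumed learning-square placement; the real UI (Fig. 2) fixes this.
SQ_X, SQ_Y, SQ_SIZE = 300, 200, 60

def learn_skin_model(frame):
    """Build a hue-saturation histogram from the pixels inside the
    learning square (the probabilistic skin-colour model)."""
    roi = frame[SQ_Y:SQ_Y + SQ_SIZE, SQ_X:SQ_X + SQ_SIZE]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Histogram over hue (0-179) and saturation (0-255); the value
    # channel is dropped so the model tolerates brightness changes.
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def segment_hand(frame, hist):
    """Back-project the skin model onto a new frame, then blur and
    threshold to suppress stray skin-coloured blobs."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    prob = cv2.GaussianBlur(prob, (11, 11), 0)
    _, mask = cv2.threshold(prob, 50, 255, cv2.THRESH_BINARY)
    return mask
```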
Our gesture alphabet consists of four hand gestures and four hand directions, chosen to fulfil the application's requirements. The hand gestures correspond to a fully opened hand (with separated fingers), an opened hand with fingers together, a fist, and a last gesture that appears when the hand is not visible, in part or completely, in the camera's field of view. These gestures are defined as Start, Move, Stop and No-Hand respectively. While in the Move gesture, the user can carry out Left, Right, Front and Back movements: for Left and Right, the user rotates his wrist to the left or right; for Front and Back, the hand moves closer to or further from the camera. The valid hand gesture transitions the user can carry out are defined in Fig. 5 and sketched in code below.
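Fig. 5 is not reproduced here, but the transition rule it encodes amounts to a small state machine. The exact transition table below is an assumption inferred from the gesture semantics in the text and may differ in detail from the paper's diagram.

```python
# Sketch of the gesture-transition check implied by Fig. 5. The table
# is inferred from the text (Start -> Move, Move -> Stop, any gesture
# may drop to No-Hand); it is illustrative, not the paper's exact figure.
VALID_TRANSITIONS = {
    "NO_HAND": {"START"},
    "START":   {"MOVE", "NO_HAND"},
    "MOVE":    {"MOVE", "STOP", "NO_HAND"},  # Move carries L/R/F/B directions
    "STOP":    {"START", "NO_HAND"},
}

def next_state(current, observed):
    """Accept the observed gesture only if the transition is valid;
    otherwise keep the current state, rejecting spurious detections."""
    return observed if observed in VALID_TRANSITIONS[current] else current
```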
The system consists of a camera as the vision sensor, connected to a laptop that processes the image/video. The laptop is connected to a wireless transmitter module which sends data to a receiving platform; the receiver in turn passes the data to the main controller, a PIC chip interfaced to the robot motors through a motor driver circuit.
Fig.6. Circuit Diagram
B. Working of the System
As shown in Fig. 6, the feed is taken from the user through the webcam and the input image is given to the computer for processing. Blurring, obtaining RGB values, grey-scaling and thresholding are performed on this input, and the finger count is obtained, which decides the direction of motion of the robot. The resulting command is passed through the serial communication port to the RF transmitter (TX), which sends it on to the RF receiver (RX). Optocouplers on the RX side isolate the downstream circuitry from damage due to voltage spikes. The received data is given to the microcontroller on the robot, which takes the appropriate action as instructed.
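A hedged sketch of this control path follows. The paper states that a finger count is obtained but not how; counting convexity defects of the hand contour is one common technique and is an assumption here, as are the serial port name and the single-byte command codes.

```python
import cv2
import serial  # pyserial

# Illustrative command bytes; the actual protocol to the PIC is not
# specified in the paper.
COMMANDS = {1: b'F', 2: b'B', 3: b'L', 4: b'R', 0: b'S'}  # S = stop

def count_fingers(mask):
    """Approximate the finger count from convexity defects of the
    largest contour; a valley between two fingers is a deep defect."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Keep only deep defects (depth is fixed-point, scaled by 256).
    deep = sum(1 for d in defects[:, 0] if d[3] > 10000)
    return min(deep + 1, 5) if deep else 0

def send_command(port, fingers):
    """Send the one-byte motion command to the RF TX over serial."""
    port.write(COMMANDS.get(fingers, b'S'))

# Usage sketch (port name is an assumption):
# tx = serial.Serial('/dev/ttyUSB0', 9600)
# send_command(tx, count_fingers(mask))
```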
C. Trigonometric Circular Scan
The detected hand image is processed further to extract the skin-coloured part of the palm. This image is then blurred to remove blobs (unwanted pixels) so that a clear image of the palm is obtained. With the help of the trigonometric circular scan, the hand position can be traced, so that while performing gestures the user need not worry about the exact hand position in front of the camera.
There are simple formulas for the trigonometric circular scan, with which the hand position can be detected dynamically. The x and y coordinates of a scan point in the circular region around the palm centre (Cx, Cy), at radius r and angle θ, are

x = Cx + r cos θ,  y = Cy + r sin θ
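The paper does not detail how the scan samples are used. One plausible reading, sketched below under that assumption, is to sample the binary hand mask along the circle given by the formulas above and count skin/non-skin transitions, each run of skin pixels crossed by the circle corresponding to a finger (or the wrist).

```python
import math

def circular_scan(mask, cx, cy, r, steps=360):
    """Sample the binary hand mask along a circle of radius r centred
    on the palm (cx, cy), using x = cx + r*cos(theta) and
    y = cy + r*sin(theta). Returns the sampled ring of 0/1 values."""
    ring = []
    for i in range(steps):
        theta = 2.0 * math.pi * i / steps
        x = int(cx + r * math.cos(theta))
        y = int(cy + r * math.sin(theta))
        if 0 <= x < mask.shape[1] and 0 <= y < mask.shape[0]:
            ring.append(1 if mask[y, x] > 0 else 0)
        else:
            ring.append(0)  # points outside the frame count as background
    return ring

def count_crossings(ring):
    """Count 0 -> 1 transitions around the ring; each transition marks
    the start of a skin run (a finger or the wrist) crossing the circle."""
    return sum(1 for a, b in zip(ring, ring[1:] + ring[:1])
               if a == 0 and b == 1)
```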
IV. CONCLUSION
The main objective of this project is to recognize a limited set of gestures automatically from hand images. On successful recognition of the gestures, a sample application driven by them has been developed; the sample application used here for demonstration purposes is a simple robot. The approach is not restricted to a simple robot: the algorithm can be expanded to operate more complex, heavy-duty robots, as in factories or industries, and can be extended in a number of ways to recognize a broader set of gestures.
REFERENCES
[1] Ching-Hao Lai, Smart Network System Institute, Institute for Information Industry, Taipei City, Taiwan. "A Fast Gesture Recognition Scheme for Real-Time Human-Machine Interaction Systems." In Proc. Conference on Technologies and Applications of Artificial Intelligence, 2011.
[2] Bojan Mrazovac, Milan Z. Bjelica, Djordje Simic, Srdjan Tikvic and Istvan Papp, Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia. "Gesture Based Hardware Interface for RF Lighting Control." In Proc. IEEE 9th International Symposium on Intelligent Systems and Informatics, Subotica, Serbia, September 8-10, 2011.
[3] Abhinav Sharma, Mukesh Agarwal, Anima Sharma and Sachin Gupta. "Sixth Sense Technology." International Journal on Recent and Innovation Trends in Computing and Communication, 2013.
[4] S. Sadhana Rao, Electronics and Communication Engineering, Anna University of Technology, Coimbatore, Jothipuram Campus, Coimbatore, Tamil Nadu, India. "Sixth Sense Technology." In Proc. International Conference on Communication and Computational Intelligence, 2013.
[5] Prajakta M. Patil, ADCET, Ashta, India, and Y. M. Patil, KIT, Kolhapur, India. "Robust Skin Colour Detection and Tracking Algorithm." International Journal of Engineering Research & Technology (IJERT), 2012.
[6] Jure Kovac, Peter Peer and Franc Solina, Faculty of Computer and Information Science, University of Ljubljana, Trzaska 25, SI-1000 Ljubljana, Slovenia. "Human Skin Colour Clustering for Face Detection." 2010.
[7] Haoyun Xue and Shengfeng Qin, School of Engineering and Design, Brunel University, London, United Kingdom. "Mobile Motion Gesture Design for Deaf People." In Proc. 17th International Conference on Automation & Computing, University of Huddersfield, Huddersfield, UK, September 10, 2011.
[8] Amit Kumar Gupta and Mohd. Shahid, PDM College of Engineering, Bahadurgarh, Haryana. "The Sixth Sense Technology." In Proc. 5th National Conference INDIACom-2011, Computing for Nation Development, March 10-11, 2011.
[9] Kevin R. Wheeler. "Device Control Using Gestures Sensed from EMG." In Proc. IEEE International Workshop on Soft Computing in Industrial Applications, Binghamton University, Binghamton, New York, June 23-25, 2003.
[10] C. Schuldt, I. Laptev and B. Caputo. "Recognizing Human Actions: A Local SVM Approach." In Proc. International Conference on Pattern Recognition, Vol. 3, IEEE Computer Society Press, Cambridge, UK, pp. 32-36, 2004.