Hand Gesture Recognition Techniques for a Wireless Robotic Arm using Calculus

DOI: 10.17577/IJERTCONV5IS04006


K. Balaji[1], T. Meenal[2], A. Dheetchitha[3], R. Keerthana[4]

Assistant Professor[1][2], UG Scholars[3][4],

Department of Electronics and Communication Engineering[1][2][3][4],

Kongunadu College of Engineering and Technology, Trichy, Tamilnadu.

Abstract: The growing role of human-machine interaction in our daily lives has made user interface technology progressively more important. Physical gestures are intuitive expressions that greatly ease the interaction process and enable humans to command machines more naturally. In this work, the movement of the hand controls the movement of a robotic arm (left, right, up, down). The hand gesture is captured by two techniques: vision based and accelerometer based. The accelerometer used is a three-axis device that captures the hand gestures, while the vision-based gestures are processed on the MATLAB platform. Both techniques are used to control a wireless robotic arm.

Index Terms: Gesture recognition, interactive controller, MEMS accelerometer, image processing

  1. INTRODUCTION

    Many special devices have been designed to enhance communication between humans and machines. Each new device aims to make the computer more intelligent and to let humans communicate with machines more easily. Every new product that emerges attempts to reduce the complexity of the jobs performed. For instance, such devices have facilitated tele-operation, robotic applications, and better human control over complex systems like cars, planes, and monitoring systems.

    The idea is to make machines controllable by humans and to develop user-friendly human-machine interfaces [7]. For this, MEMS technology is used. MEMS are Micro-Electro-Mechanical Systems, which convert energy from one form to another. Many sensors are available as MEMS devices. The accelerometer is a MEMS-based chip that detects even the slightest vibration or acceleration. Tilting the accelerometer can be treated as a gesture [6]. Gestures are non-verbal information communicated to a computer, and a person can perform any gesture of his choice. Gesture recognition is the process of recognizing and interpreting a sequential gesture from a given set of input data. Many existing devices can capture gestures, such as the Wiimote, and some are also used to provide input to other devices. There are two main types of gesture recognition methods: vision based, and accelerometer and/or gyroscope based.

    Robotic arms used in industry are controlled in many ways: via a computer, joystick, trackball, mouse, etc. Controlling them using hand gestures can enhance the interaction between humans and machines, and allows precise control of the robotic arm suited to the environment in which it is used. Humans cannot work in high-temperature or dangerous places; for this purpose a wireless implementation is also included.

  2. GESTURE RECOGNITION

    Gesture recognition refers to the process of understanding and classifying movements of the hands, arms, etc. Hand gestures are the most natural way of commanding systems, and gesture recognition has become a key feature in developing intelligent human-computer interfaces.

    Sensor based: One method of recognizing gestures is to sense the hand motion (velocities or accelerations) using sensing devices attached to the user. Here, a 3-axis MEMS accelerometer is the sensing device used to acquire the data. Gesture recognition based on accelerometer data is an emerging technique for gesture-based interaction.

    Vision based: The other technique is based on vision. Here, we focus our attention on vision-based recognition of hand gestures through fingertip detection for human-robot interaction [3]. Most complete hand-interaction systems can be considered to comprise three layers: detection, tracking, and recognition. The detection layer is responsible for defining and extracting visual features that indicate the presence of a hand in the camera's field of view. The tracking layer performs temporal data association between successive image frames. Finally, the recognition layer groups the spatiotemporal data extracted by the previous layers and assigns each group a label associated with a particular class of gesture.
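    As an illustration of the recognition layer, its final step can be reduced to classifying a tracked fingertip trajectory by its net displacement. The sketch below is in C; the trajectory representation and the movement threshold are assumptions for illustration, not details taken from the paper.

```c
/* Recognition-layer sketch: classify a tracked fingertip trajectory by its
 * net displacement. Types and the threshold are illustrative assumptions. */
#include <stdlib.h>

typedef struct { int x, y; } Point;
typedef enum { G_NONE, G_LEFT, G_RIGHT, G_UP, G_DOWN } Gesture;

Gesture recognize(const Point *traj, int n, int min_move) {
    if (n < 2) return G_NONE;
    int dx = traj[n - 1].x - traj[0].x;   /* net horizontal movement */
    int dy = traj[n - 1].y - traj[0].y;   /* net vertical movement   */
    if (abs(dx) < min_move && abs(dy) < min_move)
        return G_NONE;                    /* too little motion       */
    if (abs(dx) >= abs(dy))               /* dominant axis wins      */
        return dx > 0 ? G_RIGHT : G_LEFT;
    return dy > 0 ? G_DOWN : G_UP;        /* image y grows downward  */
}
```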

  3. HARDWARE DESCRIPTION

    The hardware components used are:

    • Controller: ARM7 LPC2129 and PIC, with power supply

    • Sensor: ADXL-335 accelerometer

    • Robotic arm unit

    • Webcam

    • Zigbee

    The LPC2129 is based on a 32-bit ARM7TDMI-S core with 256 kB of embedded high-speed flash memory. A 128-bit wide memory interface and a unique accelerator architecture enable 32-bit code execution at the maximum clock rate. The ARM controller is used mainly for its 60 MHz speed, since a quick response is needed. The ADXL335 is a small, thin, low-power, complete 3-axis accelerometer with signal-conditioned voltage outputs. It measures acceleration with a minimum full-scale range of ±3 g and can measure the static acceleration of gravity in tilt-sensing applications. It draws a low current of about 350 μA, operates from a single 1.8 V to 3.6 V supply, and survives shocks of up to 10,000 g. The robotic arm is built with gear motors: one motor for rotation left and right, and one for up and down movement. It is controlled by the PIC controller, which receives signals from the Zigbee device. Zigbee is a low-cost, low-power wireless mesh network standard operating in the 2.4 GHz band worldwide. The webcam is used for the image-based recognition.
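    Since the ADXL335 produces one signal-conditioned analog voltage per axis, the controller must digitize and scale these outputs. The sketch below shows one way this might look, assuming a 10-bit ADC, a 3.3 V supply, and the accelerometer's nominal ratiometric sensitivity; adc_read() is a hypothetical stand-in for the LPC2129's ADC driver.

```c
/* Hypothetical driver call: returns one 10-bit conversion (0..1023)
 * from the controller's ADC for the given channel. */
extern unsigned int adc_read(int channel);

#define VREF_MV    3300   /* assumed 3.3 V analog reference            */
#define ADC_STEPS  1024   /* 10-bit converter                          */
#define ZERO_G_MV  1650   /* nominal mid-supply zero-g output at 3.3 V */
#define MV_PER_G    330   /* ADXL335 ratiometric sensitivity at 3.3 V  */

/* Convert one axis of the ADXL335 to acceleration in milli-g. */
int axis_mg(int channel) {
    int mv = (int)((long)adc_read(channel) * VREF_MV / ADC_STEPS);
    return (mv - ZERO_G_MV) * 1000 / MV_PER_G;
}
```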

    Figure 1. Recognition Section

    Figure 2. Robotic Arm Section


  4. WORKING PRINCIPLE

    In the proposed system, gesture recognition based on an accelerometer and on image processing is used. A switch selects the mode in which the robotic arm unit is operated. A single-pole double-throw switch is used for this purpose: pushed to one side, the MEMS-based technique is selected; pushed to the other, the vision-based technique is selected. The sensor used here is an ADXL-335 MEMS accelerometer, a 3-axis device that captures hand movements in three dimensions. Vision-based recognition is done on a PC using MATLAB, which captures the movements of the fingertip in all four directions. These movements are processed and the output is sent via the RS-232 interface. Both inputs are processed in the LPC2129 microcontroller, and based on the mode-select switch the robotic arm is controlled either by the accelerometer or by the processed image.
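    The mode selection can be pictured as a simple dispatch loop running on the LPC2129. Every function below is a hypothetical placeholder for the board's actual drivers, shown only to make the data flow concrete.

```c
/* Hypothetical board-support helpers standing in for the real drivers. */
extern int  mode_switch_is_sensor(void);       /* SPDT switch position       */
extern int  read_accelerometer_gesture(void);  /* MEMS path                  */
extern int  read_vision_gesture_uart(void);    /* RS-232 data from MATLAB PC */
extern void send_arm_command_zigbee(int cmd);  /* wireless link to the arm   */

/* Main control loop: pick the input source from the mode switch and
 * forward any recognized gesture to the robotic arm over Zigbee. */
void control_loop(void) {
    for (;;) {
        int cmd = mode_switch_is_sensor()
                      ? read_accelerometer_gesture()  /* accelerometer mode */
                      : read_vision_gesture_uart();   /* vision mode        */
        if (cmd >= 0)                                 /* negative: no gesture */
            send_arm_command_zigbee(cmd);
    }
}
```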

    In accelerometer mode, the sensor value is read by the microcontroller and the recognized gesture is sent to the robotic arm. The robotic arm unit is built to operate in four directions: left and right, together with forward and reverse. The arm unit is made wireless so that it can operate in environments unsuitable for human interaction. The accelerometer readings corresponding to each of the four directions (left, right, up, and down) are recorded for the sensor used, and these values are set in the program to operate the arm unit, as sketched below.
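    A minimal sketch of how those recorded positions might be applied: each axis reading (in milli-g, as produced by the axis_mg() sketch above) is compared against calibrated tilt thresholds. The threshold value here is an assumed example, not the paper's calibration.

```c
/* Map tilt readings to arm commands. CMD_NONE is negative so the control
 * loop above can skip it; the ~500 mg threshold is an assumed example. */
enum { CMD_NONE = -1, CMD_LEFT, CMD_RIGHT, CMD_UP, CMD_DOWN };

int gesture_from_tilt(int x_mg, int y_mg) {
    const int T = 500;                  /* assumed tilt threshold in mg */
    if (x_mg >  T) return CMD_RIGHT;    /* hand tilted right            */
    if (x_mg < -T) return CMD_LEFT;     /* hand tilted left             */
    if (y_mg >  T) return CMD_UP;       /* hand tilted forward          */
    if (y_mg < -T) return CMD_DOWN;     /* hand tilted back             */
    return CMD_NONE;                    /* near level: no command       */
}
```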

    In vision mode, the movement of the fingertip is used to control the robotic arm. For this purpose the image is captured and recognized using a webcam and a PC running MATLAB. The recognition is based on fingertip detection and movement: the system captures video for a set time period, from which a background image and an input image are recorded. The region properties of the two images are then calculated using specific properties. The appropriate direction of movement is sent to the robotic arm section, where the controller moves the arm in the specified direction. The movement recognized by the controller is displayed on the LCD, as shown in Figure 5.
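    The background/input comparison can be illustrated by differencing two grayscale frames and locating the centroid of the changed region. The plain-C sketch below only mirrors the idea; the authors' actual implementation uses MATLAB's region properties.

```c
/* Stand-in for the MATLAB region-property step: find the centroid of the
 * pixels that differ between the background frame and the current frame. */
typedef struct { int x, y, found; } Centroid;

Centroid diff_centroid(const unsigned char *bg, const unsigned char *cur,
                       int w, int h, int thresh) {
    long sx = 0, sy = 0, n = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int d = cur[y * w + x] - bg[y * w + x];
            if (d < 0) d = -d;                 /* absolute difference */
            if (d > thresh) { sx += x; sy += y; n++; }
        }
    Centroid c = { 0, 0, 0 };
    if (n > 0) { c.x = (int)(sx / n); c.y = (int)(sy / n); c.found = 1; }
    return c;
}
```

    Tracking this centroid across successive frames gives the fingertip's direction of motion, which can then be classified as in the recognition sketch in Section 2.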

    Figure 5. Result of Sensor Movement

Figure 6. Result of MATLAB processing

  5. ADVANTAGES AND DISADVANTAGES

    Advantage: Programming a robotic arm for a specific application is usually a very tedious job. This project helps a non-expert user to control the arm easily in any application. The user can choose whichever mode suits his interest, sensor based or vision based. This also increases the human-machine interaction.

    Disadvantage: The accelerometer used is a basic one that measures tilt actions; a higher-grade accelerometer could be used for more accurate results. The vision-based technique is less effective, since the finger has to be close enough to the webcam for the image to be recognized.

  6. CONCLUSION

    We have examined the two recognition techniques. The sensor-based technique detects hand gestures with the MEMS accelerometer: motion along all three axes is taken into account, and these values are used to detect the positions of the hand. The vision-based technique relies on image processing: the movement of the finger is captured, the color and shape are segmented for recognition, and the recognized image is processed in MATLAB to track the movement of the fingertip.

  7. FUTURE ENHANCEMENT

    As a future enhancement, these recognition techniques can be applied to any type of application, further enhancing the interaction between humans and the computer. A GSM module can be added to control the application from faraway places.

  8. REFERENCES

  1. S. Zhou, Q. Shan, F. Fei, W. J. Li, C. P. Kwong, and C. K. Wu, et al., "Gesture recognition for interactive controllers using MEMS motion sensors," in Proc. IEEE Int. Conf. Nano/Micro Engineered and Molecular Systems, Jan. 2009, pp. 935–940.

  2. N. C. Krishnan, C. Juillard, D. Colbry, and S. Panchanathan, "Recognition of hand movements using wearable accelerometers," J. Ambient Intell. Smart Environ., vol. 1, no. 2, pp. 143–155, Apr. 2009.

  3. Q. Chen, N. Georganas, and E. Petriu, "Real-time vision-based hand gesture recognition using Haar-like features," in Proc. IEEE IMTC, 2007, pp. 1–6.

  4. L. Yun and Z. Peng, "An automatic hand gesture recognition system based on Viola-Jones method and SVMs," in Proc. 2nd Int. Workshop Comput. Sci. Eng., 2009, pp. 72–76.

  5. L. Shi, Y. Wang, and J. Li, "A real time vision-based hand gestures recognition system," in Proc. 5th ISICA, 2010, pp. 349–358.

  6. R. Xu, S. Zhou, and W. J. Li, "MEMS accelerometer based nonspecific-user hand gesture recognition," IEEE Sensors J., vol. 12, no. 5, pp. 1166–1173, May 2012.

  7. S. Zhou, Q. Shan, F. Fei, W. J. Li, C. P. Kwong, and C. K. Wu, et al., "Gesture recognition for interactive controllers using MEMS motion sensors," in Proc. IEEE Int. Conf. Nano/Micro Engineered and Molecular Systems, Jan. 2009, pp. 935–940.
