Design and Development of Input Device Keyboard and Paint Application using Sixth Sense Technology

DOI : 10.17577/IJERTV4IS030895




Chandana K R

Department of Electronics and Communication Don Bosco Institute of Technology

Bangalore, India

Jai Prakash Prasad

Manjunatha V G

Department of Instrumentation and Technology Dayananda Sagar College of Engineering Bangalore, India

Department of Electronics and Communication Don Bosco Institute of Technology

Bangalore, India

Abstract— Miniaturization of computing devices helps us stay in continuous touch with the digital world, and sixth sense technology is an emerging trend in this direction. Many applications can be developed using sixth sense technology to overcome the dependency on traditional hardware input devices such as the keyboard and mouse, considerably reducing the restriction of information to traditional platforms. This paper proposes the design and development of an input-device keyboard and a paint application using hand air gestures. With hand air gestures we can control any application, opening a pathway to human-computer interaction.

Keywords— MATLAB 2013, webcam, colour bands.

  1. INTRODUCTION

    We have miniaturized versions of the computer that connect our digital world with the physical world, but this alone does not free us from traditional hardware input devices. Here, sixth sense technology helps us overcome this disadvantage.

    Sixth sense is a wearable gestural interface that makes use of hand air gestures to control applications. Using this technology we can avoid input devices such as the keyboard and mouse, and we can develop many applications such as painting, music player control, and PowerPoint slide control.

    The proposed method deals with enhancing human interaction with the digital world. The restriction of information to traditional platforms such as paper and digital screens is overcome with the help of this technology. Dependency on traditional hardware input devices such as the keyboard and mouse will be reduced considerably, thereby allowing portability. It makes use of hand gestures/movements to feed input to a computer or any other digital device.

    The method implements a sixth sense technology based design and development of input devices using hand air gestures. With hand air gestures we can control any application and develop a human-computer interface that avoids computer input peripherals. There are two types of hand air gestures: static air gestures and dynamic air gestures. The key features of this technology include media player volume control, PowerPoint slide control, camera control, mouse scrolling, and initiation and termination of calls.

    HCI is one of the important areas of research where people try to improve computer technology. Nowadays we find smaller and smaller devices being used to improve technology. Vision-based gesture and object recognition is another area of research. Simple interfaces such as the embedded keyboard, folder keyboard and mini-keyboard already exist in today's market; however, these interfaces need some amount of space and cannot be used while moving. Touch screens are also globally used, offer good control, and appear in many applications; however, touch screens cannot be applied to desktop systems because of cost and other hardware limitations. By applying vision technology, a coloured substance, and image-comparison technology, we have implemented the paint application and keyboard application driven by natural hand air gestures.

    The objective of the proposed paper is summarized as follows:

    • To develop a new technology for human computer interfacing.

    • To eliminate the input devices to access the digital system.

    • To make gesture recognition person-independent.

      Sixth sense helps us integrate with the real world and make our entire environment part of the computer. Initially it was a wearable device, but research is now being carried out to overcome the disadvantage of wearing the hardware. In this paper we use only a colour band, eliminating the wearable hardware; this development makes the technology easier to use.

      Existing technologies employed within sixth sense technology are:

    • Augmented reality

    • Gesture recognition

    • Computer vision

    • Radio frequency identification

      The proposed system works as follows:

    • Virtual input is the user input; each input is pre-defined and has a specific significance.

    • Camera is used to record the hand air gestures given by the user i.e., the user input.

    • The computing device matches the user input, i.e., the gestures given by the user, against the gestures pre-defined in the computing device.

    • For every user gesture a function is defined, where each function is responsible for performing the specified action. Combining all these actions, we develop the application.

    • Display device visualizes the developed application.
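The matching-and-dispatch step described above can be sketched as a simple lookup table. This is a minimal illustration, not the paper's implementation; the gesture names and action functions below are hypothetical placeholders.

```python
# Hypothetical dispatch table: each pre-defined gesture name is bound
# to the function that performs its action.
def draw_dot(pos):
    return f"dot at {pos}"

def type_key(key):
    return f"typed {key}"

GESTURE_ACTIONS = {
    'paint_point': draw_dot,   # paint application: place a dot
    'key_press': type_key,     # keyboard application: emit a character
}

def handle_gesture(name, arg):
    """Match a recognised gesture against the pre-defined set and run
    the associated action; unknown gestures are ignored (returns None)."""
    action = GESTURE_ACTIONS.get(name)
    return action(arg) if action else None

print(handle_gesture('key_press', 'A'))   # prints: typed A
print(handle_gesture('wave', None))       # prints: None (not pre-defined)
```

The design choice here mirrors the bullet list: every gesture must exist in the pre-defined table before it can trigger an action, so unrecognised input is silently discarded.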

      In this paper we have successfully implemented the paint application and the keyboard application. The paper encloses the details of the methodology and results for both applications.

  2. LITERATURE SURVEY

    In [1], a non-instrumented gesture-recognition interface, which differentiates pointing, clicking and five static gestures, has been developed. The advantage of this approach is the non-intrusive nature of recognition, as there is no need for the user to wear any hardware. On the negative side, the hand has to remain permanently visible inside the user's personal field of view (FOV) for recognition.

    In [2], a spline interpolation is applied to smooth observations of variable length. Thereafter, an unknown gesture can be identified by finding the hidden Markov model (HMM) with the highest likelihood. For the training phase of the reference vectors and HMMs, a set of several samples from ten people was gathered; thus, the resulting gesture-recognition system can be considered person-independent. Due to its low dimensionality and marginal pre-processing, the entire recognition runs in real time with very low latency, so it can be used on-line within the demonstrator.

    In [3], instrumented systems based on non-optical tracking are used to eliminate the necessity of visible hands, but they require additional hardware to be worn by the user. As the non-instrumented approach does not depend on additional hardware worn by the user, interaction with the AR system is not limited; however, this approach requires the user's hand to be visible during the interaction. When the user's hand is lost from the camera's view, no gesture information is available, which prevents the user from interacting with the AR system.

    In [4], the AllSee system is an innovative step towards the development of gesture recognition for human-digital interaction. Further development in this field was carried out at the MIT Media Lab, which developed new applications from this technology and coined the term Wear Ur World (WUW). The limitation of the earlier prototype was a helmet with a large projector mounted on it, which was overcome by a neck-worn pendant prototype.

    In [5], the authors give a new approach for mouse movement and the implementation of its functions using a real-time camera. Most existing approaches depend on changing the mouse hardware, such as repositioning the tracking ball or adding more buttons; here, the authors instead propose changing the hardware design itself, using a camera, a coloured substance, image-comparison technology and motion-detection technology to control mouse movement and implement functions such as right click, left click, scrolling and double click. Simple interfaces such as the embedded keyboard, folder keyboard and mini-keyboard already exist in today's market, but they need some amount of space and cannot be used while moving; touch screens, though globally used as good control interfaces in many applications, cannot be applied to desktop systems because of cost and other hardware limitations. By applying vision technology, such as image segmentation and gesture recognition, and controlling the mouse with natural hand gestures, the required work space can be reduced. However, the accuracy achieved was limited.

    In [6], the author proposes a virtual keyboard. This device replaces a physical keyboard with a customizable keyboard printed on a standard A3 sheet of paper whose keystrokes are read and translated into real input. In addition, buttons can be rearranged on the fly to give a new keyboard layout, using a GUI built in MATLAB and transferring the data to the device through the computer's serial port. With rapid growth and the latest advances in the computer vision field in terms of processing capability and cost effectiveness, vision-based human-computer interfaces that offer contact-free interaction with a device are becoming a promising field. In a virtual environment, instead of a conventional keyboard, a virtual keyboard is an intuitive, immersive and cost-effective HCI device. The paper presents an implementation of a customizable virtual keyboard that uses the user's fingertips as the intuitive input device through image processing in a MATLAB program. This approach enables us to give inputs, instructions and commands to computing devices in the most natural, intuitive and convenient way, employing the human body, such as the hand, as an input device.

  3. METHODOLOGY


    The problem we are attempting to solve is to develop a technology called 6th sense technology to eliminate the input devices and use hand air gestures as input means to create a human computer interface.

    Fig 1: Block diagram of the proposed system (input → camera → computing device → display interface)

    Fig 1 shows the basic block diagram of the proposed method for developing input devices using sixth sense technology.

    Here the input given is the hand air gesture, which is captured by the camera interfaced with MATLAB. Each hand air gesture is pre-defined; the captured gesture is matched by the computing device, and the desired action is shown on the display interface.

    Fig 2 shows the flowchart for the code flow of the applications achieved.

    Algorithm:

    1. Capture the image through the webcam.

    2. Any of the three colours (R, G, B) can be used to give the gesture input; for the paint and keyboard applications we have chosen a RED filter for the captured image.

    3. Once the webcam captures the image, unwanted red components (noise) are filtered out.

    4. The finger wearing the red band is treated as the useful signal and every other region as noise, i.e., only the red-band region on the finger is retained.

    5. The centroid of the input image is mapped to the display resolution; here the display resolution is 640×480.

    6. A dot is created at the centroid; successive dots join to form a continuous line.
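As a rough sketch of steps 2 to 5, the red-band filtering and centroid mapping can be written as follows. This is a Python/NumPy stand-in for the MATLAB implementation, not the paper's code; the threshold value and the grayscale approximation are assumptions.

```python
import numpy as np

def red_band_centroid(frame, threshold=50):
    """Isolate the red band and return its centroid: subtract a rough
    grayscale image from the red plane, threshold the difference, and
    take the mean position of the surviving (distinctly red) pixels."""
    frame = frame.astype(np.int16)      # avoid uint8 wrap-around
    red = frame[:, :, 0]
    gray = frame.mean(axis=2)           # rough grayscale approximation
    mask = (red - gray) > threshold     # pixels that are distinctly red
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                     # no red band visible
    return float(xs.mean()), float(ys.mean())

def map_to_display(centroid, cam_size=(640, 480), disp_size=(640, 480)):
    """Scale a camera-space centroid to the display resolution."""
    x, y = centroid
    return (x * disp_size[0] / cam_size[0],
            y * disp_size[1] / cam_size[1])

# Synthetic 480x640 frame with a pure-red patch standing in for the band
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[100:110, 200:210, 0] = 255
c = red_band_centroid(frame)
print(map_to_display(c))                # prints: (204.5, 104.5)
```

Drawing then amounts to placing a dot at each mapped centroid per frame; at webcam frame rates the successive dots merge into a continuous line.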

    Code:

    • Select the camera device in MATLAB.

    • Configure the camera device with MATLAB.

    • Extract the RED plane from the captured image.

    • Subtract the current (grayscale) image frame from the red plane and apply a median filter to discard small areas.

    • Get the information of the remaining BLOBs (binary large objects), find the maximum BLOB, and compute its centroid.

    • Map the centroid to the video resolution to form the paint aperture; here the paint aperture is formed as a dot.

    • Similarly, for the keyboard application, the keyboard background is created and mapped with MATLAB.

    • For each key a centroid is specified.

    • When the red-filtered region is mapped to a pre-defined centroid, the alphabet corresponding to that centroid is displayed.

    • The display interface visualises the output.
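The per-key centroid lookup for the keyboard application could look like the following sketch. The key positions and search radius here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical key layout: each key name mapped to its pre-defined
# centroid (x, y) on the 640x480 keyboard background image.
KEY_CENTROIDS = {
    'A': (40, 100), 'B': (100, 100), 'C': (160, 100),
    'D': (40, 180), 'E': (100, 180), 'F': (160, 180),
}

def key_for_centroid(centroid, radius=25):
    """Return the key whose pre-defined centroid is nearest to the
    detected red-band centroid, if it lies within `radius` pixels;
    otherwise None (the finger is between keys)."""
    cx, cy = centroid
    best_key, best_dist = None, radius
    for key, (kx, ky) in KEY_CENTROIDS.items():
        dist = np.hypot(cx - kx, cy - ky)
        if dist <= best_dist:
            best_key, best_dist = key, dist
    return best_key

print(key_for_centroid((103, 177)))   # prints: E  (within 25 px of 'E')
print(key_for_centroid((300, 300)))   # prints: None (far from every key)
```

Each matched key's character is then appended to the displayed text, which is what figs 3 to 5 show on the output screen.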

      Fig 2: Flowchart of the code flow

  4. RESULTS

Fig 3. Output1 for keyboard application

Fig 4. Output2 for keyboard application

Fig 5. Output3 for keyboard application

Fig 6. Output1 for paint application

Fig 7. Output2 for paint application

Figures 3, 4 and 5 give the output screens of the keyboard application: fig 3 gives the details of the centroid mapping of the background keypad, and fig 5 shows how the text appears when the centroid is mapped to a particular key.

Similarly, figures 6 and 7 show the output screens of the paint application: fig 7 shows how the web camera is mapped with MATLAB; once it is mapped with the graphical user interface, the webcam recognises the movement of the hand gestures, the dots created run continuously, and the image is formed.

Advantages:

  • Sixth sense device is highly compatible and portable.

  • It supports multi touch and multi user applications.

  • It helps in eliminating traditional input devices like the mouse, keyboard etc.

  • It provides connectedness between the real world and the digital world.

    Disadvantages:

    • Hardware limitations of the device we currently carry with us.

    • Many phones will not allow the external camera feed to be manipulated in real time.

    • Post processing can occur.

    • Concerns about pricing and privacy.

Applications:

  • Sixth sense technology helps in connecting with maps.

  • Getting information regarding various things.

  • Interacting with the physical world and objects.

  • It can also be extended to applications such as mouseless control, PowerPoint slide control, music player volume control etc.

REFERENCES

[1]. M. Stoerring, T. B. Moeslund, Liu, and Granum, "Computer vision-based gesture recognition for an augmented reality interface," in 4th IASTED International Conference on Visualization, Imaging and Image Processing (2010).

[2]. F. Wallhoff, M. Zobl, and G. Rigoll, "Action segmentation and recognition," in Proceedings of the IEEE International Conference on Image Processing (2012), vol. 77(2).

[3]. E. Kaiser, A. Olwal, D. McGee, H. Benko, A. Corradini, X. Li, P. Cohen, and S. Feiner, "Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality," in Proceedings of the 5th International Conference on Multimodal Interfaces (2008).

[4]. K. Sekeroglu, "Virtual mouse using a webcam," in Proceedings of the IEEE International Conference on Image Processing (2011), vol. 3.

[5]. H. Park, "A method for controlling mouse movement using a real-time camera."

[6]. A. Erdem, E. Yardimci, Y. Atalay, and V. Cetin, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (2002).
