Head Tracking and its Applications in HCI Framework


Ankit Hatwalne

Department of Computer Engineering JSPM NTC Pune, India

Bhagyashwari Biradar

Department of Computer Engineering JSPM NTC Pune, India

Dnyaneshwar Devkate

Department of Computer Engineering JSPM NTC Pune, India

Dipika Nikam

Department of Computer Engineering JSPM NTC Pune, India

Prof. Dr. Sulochana Sonkamble Head of the Department JSPM NTC Pune, India

Abstract: Traditional mouse-and-keyboard interaction is adequate for applications such as word processing, but for drawing and design tasks the mouse becomes cumbersome. Any user can sketch a simple process on paper; an interactive user interface can bring that directness to the desktop by creating an infrared tracking interface that behaves like a touch screen but is considerably more affordable. Further, this application extends gesture control to other applications. Major existing systems with widespread implementations are available but are limited to optical mice, capacitive touch panels, resistive touch panels, surface acoustic wave touch panels, and the like. Touch panels move one step closer towards enhancing the interactivity of computing devices; however, they have their own shortcomings. We present an alternative to the aforementioned traditional pointing devices that uses infrared radiation as the means of propagating user interactions between users and computers. The paper proposes a framework for a pointing device based on tracking infrared light and determining its position and motion relative to the computer system or device. The use of targeted infrared radiation as the medium for delivering location information forms the basis of the proposed framework.

Keywords: Infrared Tracking, HCI, HCI Framework, Natural User Interface, Wii, IHCI.

  1. INTRODUCTION

    Human-Computer Interaction (HCI) is a discipline that helps build human-centred design skills and applies established principles and methods to create excellent interfaces with any host computer.

    Existing computer systems generally adopt an indirect approach to user interaction. This is due to the use of standard desktop-oriented devices, such as a mouse on a desktop. A more natural method of interacting with a computer is needed to improve user efficiency and to reduce the health hazards and monotony generally associated with the operation of computing systems and environments. Our goal in this paper is to design a system that allows users to interact with on-screen objects naturally and easily: an HCI framework that comes as close as possible to the real-world objects and environments around humans.

    The increasing availability of game-based technologies, together with advances in Human-Computer Interaction and Usability Engineering, poses new challenges for virtual environments and their use in e-Teaching. Consequently, the goal is to provide learners with equivalent practical learning experiences while, at the same time, supporting creativity for both teachers and learners. Market surveys have shown that the Wii remote controller (Wii mote) is more widespread than standard Tablet PCs and is the most used computer input device worldwide.

    In this paper we discuss the importance of head tracking and gestures for teaching, and describe the design and development of a low-cost demonstrator kit for the Wii mote, in order to show that gestures and head tracking can enhance the quality of the lecturing process.

  2. PREVIOUS WORK AND LITERATURE SURVEY

    A significant amount of work has been done in the areas of human-computer interaction, 3D point recognition, and natural user interfaces. A 3D hand recognition system was presented in 2009 (Wang and Popović, 2009), and efforts are under way to develop a 3D user interface similar to the one seen in the movie Minority Report (Underkoffler, 2010). There has even been some previous experimentation with 3D interaction in Spiegel (Bak, 2004). Several projects have explored the use of Wii Remotes for stereo vision (Dehling, 2008), motion capture (Wang and Huang, 2008), and finger tracking (Lee, 2008).

    However, using MATLAB limits the audience, affordability, and flexibility of the software. Some previous papers tracked multiple points in 3D, but did so only in the context of head tracking (Cuypers et al., 2009). Head tracking assumes a limited range of motion of the points, making it easier to distinguish them. Other projects involved only minimal error checking (Hay et al., 2008). In contrast, IHCI is able to track two points under more general conditions.

    The currently implemented algorithm described above senses one point. Reading two points, however, introduces ambiguity when both points lie in the same y plane, because there are then no surrounding visual cues to help distinguish between them. With such limited information it can be impossible, in certain situations, to tell which point from the second Wii Remote corresponds to the point seen by the first Wii Remote. When the points are not in the same y plane, the ray collision error can be used to match the correct points: both possible pairings are tested and the pair with the smaller error is used. This method does not work when the points are in the same y plane, because either pairing then appears to be valid.

    We leverage the knowledge that the leftmost point on the first camera should, in most situations, be paired with the leftmost point on the second camera. When the points cannot be distinguished using the above methods, we use the fact that the Wii Remotes return detected points in the same order throughout a session, and assume that this order has not changed. This can be a problem if a camera stops sensing points and then flips the order in which it senses them. In practice, however, these issues with multiple points remain largely unnoticed because the points are constantly being updated.
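    As a concrete illustration, the pairing logic can be sketched as follows. This is a minimal Python sketch under stated assumptions, not the authors' implementation: ray_collision_error() is a placeholder for the triangulation error between the two camera rays, points are plain (x, y) tuples in camera pixels, and last_order records the reporting order observed so far in the session.

    # Minimal sketch of the two-point correspondence logic described above.
    def ray_collision_error(point_a, point_b):
        """Placeholder for the triangulation error between the rays cast
        through point_a (camera 1) and point_b (camera 2).  A real system
        would intersect the 3D rays; a dummy 2D metric stands in here."""
        return abs(point_a[1] - point_b[1])

    def match_points(cam1_points, cam2_points, same_y_plane, last_order):
        """Pair the two points seen by camera 1 with those seen by camera 2.

        Outside the ambiguous case, the pairing with the smaller total ray
        collision error wins.  In the same-y-plane case both pairings look
        valid, so we fall back on the order the Wii Remotes have reported
        points in so far (last_order: "direct" or "swapped")."""
        (a1, a2), (b1, b2) = cam1_points, cam2_points
        if not same_y_plane:
            err_direct = ray_collision_error(a1, b1) + ray_collision_error(a2, b2)
            err_swapped = ray_collision_error(a1, b2) + ray_collision_error(a2, b1)
            last_order = "direct" if err_direct <= err_swapped else "swapped"
        if last_order == "direct":
            return [(a1, b1), (a2, b2)]
        return [(a1, b2), (a2, b1)]

    # Example: two IR points per camera, clearly separated in y.
    print(match_points([(100, 50), (300, 200)], [(120, 55), (310, 190)],
                       same_y_plane=False, last_order="direct"))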

  3. PROBLEM DEFINITION WITH JUSTIFICATION

    The objective is to create a GUI that calibrates the IR points through the IR sensor and then uses them to produce the resulting output. The objective could be further enlarged by designing and implementing more gestures, which would expand control over the Wii significantly. Expansion of the gesture library could be aided by tracking only one point at a time. Currently, the Wii Remote cameras must be placed in close approximation to the orientation specified by the user in software for the gesture recognition code to work correctly. A camera calibration method could be written to allow the Wii Remotes to be placed at any angle and any distance apart. Research is also needed to quantify the differences and advantages of a 3D gesture system over a traditional system.

    Fig. 1: Communication between the Wii mote and Windows.

    Considering existing solutions and methods, it is desirable to create a completely free and open-source, multiplatform, interactive head tracking system that can be built at the relatively low cost of its readily available hardware components. Such a system can be used in classrooms as a collaborative working environment to complement classroom activity whenever necessary, and in games for faster acquisition of targets.

    Hence the project allows the user to interact via the Wii. WiimoteLib allows you to connect a Wii Mote to your PC and communicate with it using .NET. Nintendo's Wii Remote (forever known as the Wii Mote) is a fantastic little controller for the Nintendo Wii system. Because it uses Bluetooth to communicate with the Wii, it can be connected to and used by practically any Bluetooth-capable device.
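    The paper's implementation talks to the Wii Mote through WiimoteLib under .NET. For readers without .NET, the same Bluetooth pairing and IR polling loop can be sketched in Python with the community cwiid binding (Linux/BlueZ); the calls below follow cwiid's API as we understand it, so treat this as a sketch to check against the library's documentation rather than as the paper's code.

    # Sketch: pair with a Wii Remote over Bluetooth and poll its IR camera.
    import time
    import cwiid

    print("Press 1+2 on the Wii Remote to make it discoverable...")
    wiimote = cwiid.Wiimote()        # blocks until a remote pairs over Bluetooth
    wiimote.rpt_mode = cwiid.RPT_IR  # ask the remote to report IR camera data

    for _ in range(100):             # poll the IR blobs for about five seconds
        sources = wiimote.state.get('ir_src') or []
        points = [src['pos'] for src in sources if src]  # (x, y) camera pixels
        print(points)
        time.sleep(0.05)

    wiimote.close()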

    Fig. 2: Basic implementation of the system.

  4. PROPOSED SYSTEM

    To understand user perception, the user's head movements must first be recorded. These recorded movements are then compared against the actual system response in order to calibrate the mapping between head motion and cursor motion. In head tracking, user perception is matched against three basic head rotations (pitch, yaw, and roll) and three basic translations (along the X, Y, and Z axes). The mouse threshold is the maximum movement that can be tracked; a rule annotated with a threshold fires only once the head movement exceeds that fraction of the tracked range.

    The proposed system, named New Relative Point (NRP), is designed around the following algorithm.

    Step 1: Start.

    Step 2: Determine the number of monitors.

    Step 3: If the number of monitors is less than or equal to one, disable the yaw and roll movements and go to Step 4; otherwise, disable the roll movement and go to Step 5.

    Step 4: (Single Monitor Screen Model: the centre of the screen is defined as c (0, 0).)

    If Pitch
        If head down: mouse up (Mouse Y+) (threshold 50%)
        If head up: mouse down (Mouse Y-) (threshold 50%)
    If Translation
        If translation X
            If head left translation: mouse right (Mouse X+) (threshold 80%)
            If head right translation: mouse left (Mouse X-) (threshold 80%)
        If translation Y
            If head up translation: mouse down (Mouse Y-) (threshold 80%)
            If head down translation: mouse up (Mouse Y+) (threshold 80%)
        If translation Z
            If head zoom-in translation: screen zoom out (Mouse X+) (threshold 80%)
            If head zoom-out translation: screen zoom in (Mouse X-) (threshold 80%)
    Go to Step 6.

    Step 5: (Multiple Monitor Screen Model: the centre of screen number (n+1)/2 is defined as pixel c (0, 0).)

    If Pitch
        If head down: mouse up (Mouse Y+) (threshold 50%)
        If head up: mouse down (Mouse Y-) (threshold 50%)
    If Yaw
        If head left (Label 1): select the monitor to the left
        If head right (Label 2): select the monitor to the right
    If Translation
        If translation X
            If head left translation: mouse left (Mouse X-) (threshold 0%); go to Label 1
            If head right translation: mouse right (Mouse X+) (threshold 0%); go to Label 2
        If translation Y
            If head up translation: mouse down (Mouse Y-) (threshold 0%)
            If head down translation: mouse up (Mouse Y+) (threshold 0%)
        If translation Z
            If head zoom-in translation: screen zoom out (Mouse X+) (threshold 0%)
            If head zoom-out translation: screen zoom in (Mouse X-) (threshold 0%)
    Go to Step 6.

    Step 6: End.

    This algorithm is developed for a system implementing virtual reality with head tracking; a code sketch of its dispatch logic follows.
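    The following minimal Python sketch implements the dispatch above under stated assumptions: the head pose arrives as rotations and translations normalised to [-1, 1]; the sign conventions (positive yaw = head left, positive tx = head right, positive ty = head up, positive tz = head towards the screen) are ours; and move_mouse(), zoom_screen() and select_monitor() are hypothetical hooks standing in for OS-level cursor and window control.

    # Sketch of the NRP (New Relative Point) dispatch logic.
    def exceeds(value, threshold):
        """A rule fires only once the movement exceeds its threshold fraction."""
        return abs(value) > threshold

    def nrp_step(pose, monitors, move_mouse, zoom_screen, select_monitor):
        single = monitors <= 1             # Step 3: choose the screen model
        t = 0.8 if single else 0.0         # translation threshold per model

        # Pitch: 50% threshold in both models; head down -> mouse up (Mouse Y+).
        if exceeds(pose["pitch"], 0.5):
            move_mouse(dy=+1 if pose["pitch"] < 0 else -1)

        # Yaw selects monitors, and only exists in the multi-monitor model.
        if not single and exceeds(pose["yaw"], 0.0):
            select_monitor("left" if pose["yaw"] > 0 else "right")

        # Translation X: the single-monitor model mirrors the direction.
        if exceeds(pose["tx"], t):
            dx = +1 if pose["tx"] > 0 else -1      # direct (multi-monitor) mapping
            move_mouse(dx=-dx if single else dx)   # mirrored for a single monitor

        # Translation Y: head up -> mouse down (Mouse Y-) in both models.
        if exceeds(pose["ty"], t):
            move_mouse(dy=-1 if pose["ty"] > 0 else +1)

        # Translation Z: head zoom in -> screen zoom out, and vice versa.
        if exceeds(pose["tz"], t):
            zoom_screen("out" if pose["tz"] > 0 else "in")

    # Example with print-based hooks on a single monitor:
    nrp_step({"pitch": 0.6, "yaw": 0.0, "tx": 0.9, "ty": 0.0, "tz": 0.0},
             monitors=1,
             move_mouse=lambda dx=0, dy=0: print("mouse", dx, dy),
             zoom_screen=lambda d: print("zoom", d),
             select_monitor=lambda m: print("monitor", m))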

  5. APPLICATIONS

    Basic Gestures

    There are two categories of basic gestures implemented at this time: pinch and swipe. A pinch gesture is activated when the two points seen by the cameras move close enough together that they appear to be one point. There is also an unpinch gesture, activated when a pinched point separates back into two points. An unpinch gesture can only be detected after a pinch has taken place, which keeps the cameras from falsely identifying two unrelated points as an unpinch gesture.
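    A minimal sketch of this behaviour, assuming the merge of the two points is modelled as their pixel distance falling below a threshold (PINCH_DIST is an illustrative value, not taken from the paper):

    # Sketch of the pinch/unpinch state machine.  An unpinch is only
    # recognised after a pinch, so two unrelated points separating are
    # never misread as an unpinch.
    import math

    PINCH_DIST = 20.0    # assumed pixel distance at which two points "merge"

    class PinchDetector:
        def __init__(self):
            self.pinched = False

        def update(self, points):
            """points: list of (x, y) tuples currently seen by the cameras."""
            if len(points) != 2:
                return None
            (x1, y1), (x2, y2) = points
            close = math.hypot(x2 - x1, y2 - y1) < PINCH_DIST
            if close and not self.pinched:
                self.pinched = True
                return "pinch"
            if not close and self.pinched:
                self.pinched = False
                return "unpinch"
            return None

    det = PinchDetector()
    print(det.update([(100, 100), (104, 102)]))   # -> pinch
    print(det.update([(80, 100), (160, 102)]))    # -> unpinch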


    The other basic gesture, swipe, is activated when the points move a set distance in any dimension. The movement of either one or two points is tracked, depending on how many points are currently seen. Tracking any number of points allows a swipe to be detected regardless of the pinching state. Swipes can be detected in both positive and negative directions.
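    The swipe detector can be sketched the same way; the trigger distance and the (x, y, z) point format are assumptions for illustration:

    # Sketch: a swipe fires when a tracked point travels a set distance
    # along any axis, in either direction, regardless of the pinch state.
    SWIPE_DIST = 50.0    # assumed trigger distance in camera units

    class SwipeDetector:
        def __init__(self):
            self.origin = None

        def update(self, point):
            """point: (x, y, z) position of the tracked point."""
            if self.origin is None:
                self.origin = point
                return None
            for axis, (o, p) in enumerate(zip(self.origin, point)):
                if abs(p - o) >= SWIPE_DIST:
                    self.origin = point        # re-arm for the next swipe
                    return "swipe %s%s" % ("+" if p > o else "-", "xyz"[axis])
            return None

    det = SwipeDetector()
    det.update((0, 0, 0))
    print(det.update((60, 5, 0)))   # -> swipe +x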

    Composite Gestures

    By combining the basic gestures discussed above with the current position of the points, application-specific gestures can be created. These composite gestures can be very simple, using just one basic gesture to activate some sort of on-screen movement, or as complicated as necessary, making use of several gestures in sequence.

    A rather conventional application would be to map the movement of the points to the cursor position and then map a pinch/unpinch to a click. The gestures implemented for Spiegel, described in detail in the sections that follow, ultimately provide an example of more complex composite gestures.
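    As an illustration of how the basic gestures compose, the sketch below drives the cursor from the point position and turns a completed pinch/unpinch pair into a click; cursor_move() and click() are hypothetical hooks into the windowing system:

    # Sketch of a composite gesture built from the basic ones.
    def run_composite(stream, cursor_move, click):
        """stream yields (gesture, (x, y)) pairs from the basic detectors,
        where gesture is None, "pinch" or "unpinch"."""
        armed = False                  # a pinch arms the click
        for gesture, (x, y) in stream:
            cursor_move(x, y)          # the cursor always follows the points
            if gesture == "pinch":
                armed = True
            elif gesture == "unpinch" and armed:
                click(x, y)
                armed = False

    run_composite([(None, (10, 10)), ("pinch", (12, 11)), ("unpinch", (12, 11))],
                  cursor_move=lambda x, y: print("move", x, y),
                  click=lambda x, y: print("click", x, y))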

    3D HCI Framework

    This feature is used in games to give a multidimensional view of the system. Through head tracking with the Wii mote, a user can form a viewing spectrum for the gaming zone. We have also developed an algorithm to extract the 3D point from the two images acquired by the Wii Remotes, instead of using proprietary software, in order to keep the cost down for anyone expanding upon our project.
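    Although the paper does not reproduce the extraction algorithm in detail, the underlying geometry is standard stereo triangulation: each camera pixel defines a ray, and the 3D point is estimated at the midpoint of the shortest segment between the two rays. The sketch below makes the usual simplifying assumptions (known camera positions, an assumed focal length, both cameras facing the same direction); the residual segment length doubles as the ray collision error used earlier for point matching.

    # Sketch: triangulate a 3D point from the 2D blobs of two IR cameras.
    import numpy as np

    def pixel_to_ray(pixel, focal=1700.0, res=(1024, 768)):
        """Turn a camera pixel into a unit direction ray in camera coordinates.
        The focal length (in pixels) is assumed; tune it to the camera's
        field of view.  The Wii Remote reports blobs on a 1024x768 grid."""
        x = (pixel[0] - res[0] / 2) / focal
        y = (pixel[1] - res[1] / 2) / focal
        d = np.array([x, y, 1.0])
        return d / np.linalg.norm(d)

    def triangulate(origin1, dir1, origin2, dir2):
        """Midpoint of the shortest segment between two skew rays, plus the
        segment length (the 'ray collision error')."""
        w0 = origin1 - origin2
        a, b, c = dir1 @ dir1, dir1 @ dir2, dir2 @ dir2
        d, e = dir1 @ w0, dir2 @ w0
        denom = a * c - b * b                  # zero only for parallel rays
        s = (b * e - c * d) / denom            # parameter along ray 1
        t = (a * e - b * d) / denom            # parameter along ray 2
        p1, p2 = origin1 + s * dir1, origin2 + t * dir2
        return (p1 + p2) / 2, np.linalg.norm(p1 - p2)

    # Two cameras 0.5 m apart, both looking down +Z (a simplifying assumption).
    o1, o2 = np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])
    point, error = triangulate(o1, pixel_to_ray((700, 400)),
                               o2, pixel_to_ray((300, 400)))
    print(point, error)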

  6. CONCLUSION AND FUTURE WORK

    The mapping in the algorithm is geometric, taking the environment, the objects, and the exterior surroundings into account. Implementing actual reality with a VR system requires spatial perception, such as a first-person or third-person perspective, and an understanding of how that perspective connects with the implemented algorithm.

    An important task is to create a learning procedure and a mapping that further bring out the importance of computer vision and human vision interacting with each other. It is also a question of learning and model-development techniques, which can be developed further on this basis, with a suitable platform, to design and refine the technique so that the actual perception of the human user carries over into a specific collaborative application.

    The algorithm uses an API consisting of a mouse-look function, of the kind mostly used in first-person shooter games, which must be developed on the user side so that the algorithm remains a generalized solution.

    The main disadvantage of the tracking is that only one head, that is, one person, can be tracked at a time; the algorithm performs invalid operations if more than one person comes into focus. Within that limit, the proposed system can track the head using a Wii mote over Bluetooth, the main feature of the project, and display the resulting output on the screen.

  7. ACKNOWLEDGEMENT

    We express our sincerest gratitude towards the college authorities, our guide and Head of the Department Prof. Dr. Sulochana Sonkamble whose invaluable guidance was very helpful in completion of this project.

    This research is developed for the completion of final year project for B.E. (Computer), Department of Computer Engineering, JSPM NTC.

  8. REFERENCES

  1. A. Al-Rahayfeh and M. Faezipour, "Eye Tracking and Head Movement Detection: A State-of-Art Survey," University of Bridgeport, Bridgeport, CT 06604, USA.

  2. A. Ciavolino, C. Marvin, J. Creighton, J. Coddington, H.-P. Bischof, and R. Bailey, "Towards Affordable Gesture Based Interfaces: An Exploration with Wii Remotes," University of Maryland, Baltimore County; Harvey Mudd College; Rochester Institute of Technology.

  3. K. W. Arthur, K. S. Booth, and C. Ware, "Evaluating 3D Task Performance for Fish Tank Virtual Worlds," ACM Transactions on Information Systems, vol. 11, no. 3, 1993, pp. 239-265.

  4. P. Salamin, T. Tadi, O. Blanke, F. Vexo, and D. Thalmann, "Quantifying Effects of Exposure to the Third and First-Person Perspectives in Virtual-Reality-Based Training."

  5. I. B. Ozer, T. Lu, and W. Wolf, "Design of a Real-Time Gesture Recognition System," IEEE Signal Processing Magazine, May 2005.

  6. T. Schlömer, B. Poppinga, N. Henze, and S. Boll, "Gesture Recognition with a Wii Controller."
