Haptic Technology for Daily Life Applications

DOI : 10.17577/IJERTCONV1IS04049




Manan Bakshi1, Deep Kumar2, Pranav Chug3, Mr. Piyush Kumar Pareek4, Mrs. Priyanga P5

1,2 Student, Department of Computer Science, K.S. Institute of Technology, Bangalore-62

3 Student, Computer Science and Engineering Department, Arya Institute of Engg. & Tech., Jaipur

4,5 Assistant Professor, Department of Computer Science, KSIT, Bangalore, Karnataka, India

Email id: piyushpareek88@gmail.com

Abstract

This paper presents an idea for using haptic technology to interact with virtual objects, implemented for daily-life applications within a multi-modal interactive system. In such systems, haptic feedback is typically computed at the points of contact, rendered, and returned to the user as both tactile and force feedback. The paper outlines a procedure by which this can be implemented so that man-machine interaction is enhanced. It considers the pros and cons of the current scenario, discusses how the required computation can be performed with the help of Kinect technology, and suggests an algorithm that can help in implementing the idea.

Keywords: Haptics, Kinect, Multi-modal interactive system, Outside-In system, Tactile sensors.

  1. INTRODUCTION

    Haptic refers to the sense of touch and is derived from the Greek word haptikos. Haptics is the science of applying touch (tactile) sensation and control to interaction with computer applications. With the help of specially designed input/output devices (joysticks, data gloves and other special gloves), users can receive feedback from a computer application in the form of felt sensations in the hand or other parts of the body. Combined with a visual display, haptic technology can be used to train people in tasks requiring hand-eye coordination, such as medical procedures and aircraft simulation. It can also be used for gaming, in which we both see and feel our interactions with the computer: for example, we might play tennis with another computer user somewhere else in the world, where both players can see the ball moving and, with the help of haptic technology, swing a virtual tennis racket and feel the impact.

  2. HUMAN MOVEMENT TRACKING TECHNOLOGIES

    This section discusses the general issues that must be considered in evaluating human movement tracking systems and summarises the currently available sensing technologies for tracking human movement.

    2.1 KINECT TECHNOLOGY

    Kinect is a motion-sensing input device by Microsoft for the Xbox 360 video game console and Windows PCs. Based around a webcam-style add-on peripheral for the Xbox 360 console, it enables users to control and interact with the Xbox 360 without touching a game controller, through a natural user interface that uses gestures and spoken commands. The Kinect can recognise only a specific range of gestures and voice commands. The user is interpreted by the system completely hands-free, using an infrared projector, a camera and a special microchip to sense the movements of objects and individuals in 3D. This 3D scanner system, called Light Coding, employs a variant of image-based 3D reconstruction.

    The device features an RGB camera, a depth sensor and a multi-array microphone running proprietary software, which together provide full-body 3D motion capture, facial recognition and voice recognition capabilities. The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3D under any ambient light condition.

    Kinect is capable of simultaneously tracking multiple people, including two active players for motion analysis with feature extraction of 20 joints per player. The Kinect sensor has a practical ranging limit of 1.2-3.5 m (3.9-11 ft) when used with the Xbox software. The sensor has an angular field of view of 57° horizontally and 43° vertically, while the motorized pivot can tilt the sensor up to 27° either up or down.
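    As a rough illustration of how such depth data can be turned into 3D coordinates, the sketch below back-projects a depth pixel into a 3D point using a pinhole-camera model. It is a minimal sketch, not Kinect SDK code; the 640x480 depth resolution and the derivation of focal lengths from the 57°/43° field of view stated above are assumptions made for illustration.

```python
import math

# Assumed Kinect-like depth image parameters (not official SDK values).
WIDTH, HEIGHT = 640, 480          # depth image resolution (assumed)
FOV_H = math.radians(57.0)        # horizontal field of view from the text
FOV_V = math.radians(43.0)        # vertical field of view from the text

# Pinhole-camera focal lengths, in pixels, derived from the field of view.
FX = (WIDTH / 2.0) / math.tan(FOV_H / 2.0)
FY = (HEIGHT / 2.0) / math.tan(FOV_V / 2.0)
CX, CY = WIDTH / 2.0, HEIGHT / 2.0

def depth_pixel_to_point(u, v, depth_m):
    """Back-project a depth pixel (u, v) with depth in metres to a 3D point."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# Example: a pixel slightly off the image centre, measured at 2.0 m depth.
print(depth_pixel_to_point(400, 300, 2.0))
```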

    Considering these technologies and other approaches, we now put forth our idea for implementation. The very first step is tracking the human body, which, as mentioned earlier, can be done using the various techniques discussed below.

      2.2 INSIDE-IN SYSTEMS

        Inside-in systems are defined as those which employ sensor(s) and source(s) that are both on the body (e.g. a glove with piezo-resistive flex sensors). The sensors generally have small form-factors and are therefore especially suitable for tracking small body parts. Whilst these systems allow for capture of any body movement and allow for an unlimited workspace, they are also considered obtrusive and generally do not provide 3D world-based information.

        • Many of these technologies do not allow for registration of joint-axial rotation (e.g. pronation/supination of the wrist).

        • All the technologies use body-centered (joint- angle, eye-rotation, muscle-tension) coordinates.

        • There is no external source or reference necessary, i.e. the workspace is in principle unlimited.

        • Because inside-in systems are worn on the body, they are generally considered obtrusive.

        • Resolution, static/dynamic range, bandwidth and latency are all limited by the interface circuitry, generally not by the sensors.

        • Most of the technologies have small form- factors and are therefore especially suitable for small body parts (finger, eye, toe). For larger body parts the accuracy of these technologies may be reduced due to body fat.

        2.2.1 AVAILABLE TECHNOLOGIES

          • Piezo-resistive flex (Virtex Cyberglove, TCAS Datawear, Mattel/Nintendo Powerglove, Penny & Giles goniometers, GLAD-IN-ART glove, various force-ball devices) high repeatability (durable), small (a joint-angle calibration sketch follows this list)

          • Piezo-electric flex (Mulder 1988) medium repeatability (drift), small

          • Fiber-optic flex (VPL Dataglove, W Industries Spaceglove) medium repeatability (low durability due to wear/tear of cracks), computing necessary (nonlinear transfer)

          • Light-tube flex (Sayre glove, see Sturman (1994)) low repeatability (noise), computing necessary (nonlinear transfer)
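          As referenced above, a minimal calibration sketch for a piezo-resistive flex sensor is given below. The ADC range, the calibration readings and the purely linear mapping are illustrative assumptions, not properties of any of the gloves listed; such sensors are in practice only approximately linear.

```python
def calibrate_flex(raw_straight, raw_bent, angle_bent_deg=90.0):
    """Build a linear mapping from raw ADC readings to an approximate joint angle.

    raw_straight / raw_bent are readings captured with the finger fully
    straight and fully bent; a linear fit is a common first approximation,
    although piezo-resistive sensors are not perfectly linear.
    """
    scale = angle_bent_deg / (raw_bent - raw_straight)

    def to_angle(raw_reading):
        return (raw_reading - raw_straight) * scale

    return to_angle

# Example: assumed 10-bit ADC readings of 210 (straight) and 740 (bent).
to_angle = calibrate_flex(raw_straight=210, raw_bent=740)
print(round(to_angle(475), 1))   # roughly mid-flex, about 45 degrees
```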

        2.3 INSIDE-OUT SYSTEMS

                Inside-out systems employ sensor(s) on the body that sense artificial external source(s) (e.g. a coil moving in an externally generated electromagnetic field) or natural external source(s) (e.g. a mechanical head tracker using a wall or ceiling as a reference, or an accelerometer moving in the earth's gravitational field). Although these systems provide 3D world-based information, their workspace and accuracy are generally limited by the use of the external source, and their form factor restricts use to medium- and larger-sized body parts.

          • The external source, however, does in most cases provide 3D, world-based information, i.e. joint-axial rotations can be measured.

          • The form-factor is in most cases fairly large so that the technologies usually apply to larger body parts (i.e. not for eye, finger or toe), imply some obtrusiveness and may have limited accuracy due to inertia of the sensor/receptor (the receiver may shift due to skin/muscle movements). Additionally, there will be some offset introduced due to the receiver size.

          • Most of the technologies involve some computing which may increase response latency.

            • Resolution, static/dynamic range and bandwidth: accuracy of 1 mm and 0.01 deg in registered see-through applications, 1 cm in non-registered applications

            • Sampling rate >= 60 Hz

            • Direct sensing of state and derivatives

        2.3.1 AVAILABLE TECHNOLOGIES (Natural Source)

          • Piezo-electric accelerometer (Ladin, 1989 and Doyle, 1993) medium repeatability (needs averaged real-time calibration), interference (gravity field), computing necessary (integration of 3D acceleration; a double-integration sketch follows this list), fairly bulky, unlimited workspace, expensive

          • Gyroscope (solid state, micromechanical or laser, see Doyle, 1993) medium repeatability (need averaged real-time calibration), small, high latency ?, unlimited workspace, expensive

          • Inclinometer only for 2D orientation in vertical plane, (tilt, angle of inclination), limited range, how to separate acceleration from gravity, small ?, unlimited workspace

          • Magnetic fluxgate compass (Doyle, 1993) only for 2D orientation in horizontal plane, bulky, unlimited workspace, interference from ferromagnetic materials, slow
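          As referenced above, the sketch below illustrates the "integration of 3D acceleration" noted for accelerometers. The sampling rate, the constant-gravity subtraction and the synthetic data are assumptions made for illustration; as the repeatability note indicates, integration drift makes periodic recalibration necessary in practice.

```python
import numpy as np

DT = 1.0 / 100.0                      # assumed 100 Hz sampling interval

def integrate_acceleration(accel_samples, gravity=np.array([0.0, 0.0, 9.81])):
    """Double-integrate world-frame 3D acceleration (m/s^2) to position.

    accel_samples: (N, 3) array of accelerometer readings. A constant
    gravity vector is subtracted; a real system needs orientation tracking
    and repeated calibration because integration drift accumulates.
    """
    velocity = np.zeros(3)
    position = np.zeros(3)
    trajectory = []
    for a in accel_samples:
        linear = a - gravity          # crude gravity removal (assumption)
        velocity += linear * DT       # first integration: velocity
        position += velocity * DT     # second integration: position
        trajectory.append(position.copy())
    return np.array(trajectory)

# Example: 1 s of stationary readings, which ideally integrates to ~zero motion.
samples = np.tile([0.0, 0.0, 9.81], (100, 1))
print(integrate_acceleration(samples)[-1])
```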

        2.3.2 AVAILABLE TECHNOLOGIES (Artificial Source)

          • Potentiometer / optical encoder, externally attached via mechanical linkage (Shooting Star Technology head tracker, Fake Space BOOM, Immersion Probe, Sutherland head tracker, various mice) high accuracy, high repeatability, low latency (no filtering), bulky, obtrusive/encumbering, small workspace, best useful for free-space movements, compensation necessary for inertia of system

          • DC EM pulse (Ascension Technology 6DOF tracker) high accuracy, medium repeatability (interference from the earth's magnetic field and, less, ferromagnetic materials), medium dynamic accuracy (filtering), computing intensive, small workspace, medium latency

          • AC EM field strength (Polhemus 6DOF tracker) high accuracy, medium repeatability (interference from ferro-magnetic materials), computing intensive, small workspace, medium latency

          • AC EM field, phase coherent high accuracy, medium repeatability (interference from metallic materials), multiple separately located transmitters/receivers

    2.4 OUTSIDE-IN SYSTEMS

    Outside-in systems employ an external sensor that senses artificial source(s) or marker(s) on the body (e.g. an electro-optical system that tracks reflective markers) or natural source(s) on the body (e.g. a video-camera-based system that tracks the pupil and cornea). These systems generally suffer from occlusion and a limited workspace, but they are considered the least obtrusive. Due to the occlusion it is hard or impossible to track small body parts unless the workspace is severely restricted (e.g. eye movement tracking systems). The optical or image-based systems require sophisticated hardware and software and are therefore expensive. A marker-detection sketch is given after the list below.

    • These tracking technologies are generally the least obtrusive of movement tracking technologies.

    • Video camera-based technologies are limited by occlusion. For movements of larger body parts this may be solvable, but for e.g. fingers, two closely interacting hands, or two closely interacting persons it remains a major problem.

    • Video camera-based technologies are computing intensive due to difficulties with staying locked onto the body part or marker and/or the involved transformations of data, so that response latency may be high (especially relevant for eye-tracking).
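    To make the camera-based approach concrete, here is a minimal, hedged sketch of detecting the centroid of a bright reflective marker in each frame. The OpenCV 4 API, the brightness threshold and the camera index are assumptions for illustration; it also shows why occlusion is a problem, since a hidden marker simply yields no detection.

```python
import cv2

def find_marker_centroid(gray_frame, brightness_threshold=240):
    """Return the (x, y) centroid of the brightest blob, or None if occluded."""
    _, mask = cv2.threshold(gray_frame, brightness_threshold, 255, cv2.THRESH_BINARY)
    # OpenCV 4 return convention (contours, hierarchy) is assumed here.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                    # marker occluded or out of view
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# Usage sketch: read one frame from an assumed camera index 0.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(find_marker_centroid(gray))
cap.release()
```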

        2.4.1 AVAILABLE TECHNOLOGIES (Natural Source)

          • Body image analysis very computing intensive, low repeatability (computer vision technology not reliable)

          • Pupil tracking (ISCAN RK426, LC Technologies Eyegaze, Micromeasurements system) as above, only for eye

    • 1st-4th Purkinje trackers for eye only, bulky

    • Scleral and limbus reflection for eye only, low repeatability (noise, drift and artifacts due to movements of eyelids, etc)

        2.4.2 AVAILABLE TECHNOLOGIES (Artificial Source)

          • Reflective marker (BTS ELITE, Adaptive Optics Associates Multi Trax, Oxford Metrics VICON, Charnwood Dynamics CODA, Optikon MacReflex, Peak Performance Technologies PEAK5, HCS Vision Technology Primas, Motion Analysis ExpertVision 3D, Yaman Optfollow 7100, Hentschel HSG, Optron 5600, Origin Instruments head tracker) medium reliability (marker identification errors), not for small body parts (marker identification errors) [also for eye ?]

          • Infrared LED (Northern Digital Optotrak, Log- In COSTEL, Selspot SELSPOT, Honeywell LED array helmet tracker, United Detector Technology Instrument Group Opeye) medium reliability (errors due to ambient infrared radiation, marker reflections and geometric errors), medium latency (LED sequencing)

          • B/W marker pattern (Honeywell video metric helmet tracker)

          • Colour marker pattern (Columbus Instruments Biovision, Dorner 1994)

    2.5 O'ROURKE AND BADLER'S SYSTEM

    According to O'Rourke and Badler, the human body can be segmented into 24 rigid segments and 25 joints. Using these segments and joints, a very elaborate model can be constructed for motion analysis. Overlapping sphere primitives can be used to define the surface of each body segment. This approach uses predefined models for motion analysis and body structure recovery.
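    To make the segment representation concrete, here is a minimal sketch of modelling one body segment as overlapping sphere primitives. The sphere counts, radii, coordinates and the containment test are illustrative assumptions, not values from O'Rourke and Badler.

```python
from dataclasses import dataclass
from typing import List, Tuple
import math

Point = Tuple[float, float, float]

@dataclass
class BodySegment:
    """A rigid body segment approximated by overlapping sphere primitives."""
    name: str
    sphere_centres: List[Point]          # centres spaced along the segment axis
    sphere_radius: float                 # single radius per segment (assumed)

    def contains(self, p: Point) -> bool:
        """True if point p lies inside the union of the segment's spheres."""
        return any(
            math.dist(p, c) <= self.sphere_radius for c in self.sphere_centres
        )

# Example: a forearm approximated by three overlapping spheres along its axis.
forearm = BodySegment(
    name="forearm",
    sphere_centres=[(0.0, 1.0, 0.0), (0.0, 1.1, 0.0), (0.0, 1.2, 0.0)],
    sphere_radius=0.05,
)
print(forearm.contains((0.0, 1.05, 0.02)))
```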

    Various motion constraints of human body parts are measured, and a coordinate system is embedded in the segments along with these constraints. The model has four main processes: image analysis, parsing, prediction and simulation.

    Image analysis: the image analysis part traces the location of body parts in the coordinate system. If any previous prediction results are available, the image analyser uses them to reduce the range of the predicted 3D locations of body parts.

    Figure: Overview of O'Rourke and Badler's system.

    Parser: the parser transforms the location-time relationships traced by the image analyser into linear functions.

    Prediction: this phase uses the determined linear functions to predict the positions of body parts in the next frame.

    Simulator: the simulator is embedded with extensive knowledge of human body parts. It translates the results of the prediction phase into corresponding 3D regions.

    The system can be made more efficient by implementing our idea: the image analyser runs in parallel with all the other processes. The results from the predictor for the next frame are compared with the results of the image analyser. If these results are sufficiently compatible, they are used to further reduce the range of the predicted 3D locations of body parts, thus increasing efficiency. Otherwise, the predictor's results are discarded and the image-analysis results are sent to the parser.
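    A minimal sketch of this proposed modification is given below. The compatibility test, the tolerance value, the averaging step and the linear-prediction form are our own illustrative assumptions, not part of O'Rourke and Badler's system.

```python
import numpy as np

TOLERANCE = 0.05   # assumed compatibility threshold in metres

def predict_next(positions, velocities, dt=1.0 / 30.0):
    """Linear prediction of each body part's position for the next frame."""
    return positions + velocities * dt

def fuse_frame(predicted, analysed, tolerance=TOLERANCE):
    """Compare predictor output with image-analyser output for one frame.

    If prediction and analysis agree within the tolerance, average them to
    narrow the predicted 3D range; otherwise discard the prediction and
    forward the image-analysis result to the parser unchanged.
    """
    error = np.linalg.norm(predicted - analysed, axis=1)
    compatible = error < tolerance
    fused = np.where(compatible[:, None], (predicted + analysed) / 2.0, analysed)
    return fused, compatible

# Example with three tracked body parts (positions in metres).
positions = np.array([[0.0, 1.0, 2.0], [0.5, 1.2, 2.0], [0.1, 0.9, 1.8]])
velocities = np.array([[0.3, 0.0, 0.0], [0.0, 0.3, 0.0], [0.0, 0.0, 0.3]])
predicted = predict_next(positions, velocities)
analysed = predicted + np.array([[0.01, 0, 0], [0.2, 0, 0], [0.0, 0.01, 0]])
print(fuse_frame(predicted, analysed))
```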

  3. USING TACTILE SENSORS AND OUTSIDE-IN SYSTEMS

    Combining the different technologies, we came up with the idea of making haptic technology much more easily available to the everyday user. This can be done at minimum cost, and the feedback received will be tactile, which is much better than a conventional graphical user interface alone.

    Haptic technology can thus be made available for the majority of applications. The main idea of this system is to use a human body movement tracker, digitise the results and feed them to an artificial intelligence system, which interprets them and sends the processed results to an output system. Here the output system might be a computer with a visual graphical interface, or a robot assigned to replicate the human body movement for the desired application, just as haptic feedback is given through the controller in video games.
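    A hedged sketch of this tracker-to-output pipeline is given below. The class names, the trivial rule-based interpreter standing in for the artificial-intelligence component, and the print-based output are all illustrative assumptions rather than a prescribed implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

Joint = Tuple[float, float, float]       # digitised 3D joint position (metres)

@dataclass
class TrackedFrame:
    joints: List[Joint]                  # output of the movement tracker

def interpret(frame: TrackedFrame) -> str:
    """Stand-in for the AI interpreter: map joint data to a command.

    A trivial rule (an arm is considered raised if any joint is above 1.5 m)
    is used purely for illustration; a real system would use a trained model.
    """
    if any(y > 1.5 for _, y, _ in frame.joints):
        return "RAISE_ARM"
    return "IDLE"

def send_to_output(command: str, target: str = "robot") -> None:
    """Forward the interpreted command to a robot or a graphical interface."""
    print(f"[{target}] executing command: {command}")

# Example: one digitised frame from the tracker, interpreted and dispatched.
frame = TrackedFrame(joints=[(0.0, 1.6, 0.5), (0.1, 1.0, 0.5)])
send_to_output(interpret(frame))
```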

  4. APPLICATIONS

    The emerging field of haptics has tremendous scope in the coming years; the technology has gained a lot of attention in recent times, yet it is still in its infancy.

    1. GAMING:

      A major attraction would be the application of haptics in the gaming world. By designing a suit, glove or other sensor suited to the game, the gaming experience can be made far more realistic. Haptics is likely to shape the future of gaming, as it can not only display images but also give tactile feedback to the user.

    2. COMPUTERS

      Haptic technology can be used to create a three-dimensional virtual world which, compared to the normal graphical user interface of a computer display, is far richer, since it can give feedback such as pressure and resistance that a common graphical interface cannot. Using this, a user sitting at one end of the system can virtually experience a real object located at the other end.

    3. MEDICAL FIELD

      The technology has given rise to another system, called telepresence, which is especially used in the medical field. With the help of haptics, a doctor in a remote area can operate on a patient with high accuracy using 3D video and a robotic arm under haptic control. The operation can thus be performed without the doctor being present at the patient's location.

    4. AIRCRAFT SIMULATION

      This technology can also be used in areas where a person cannot be allowed to operate a system without prior experience of it, such as flying an aircraft. In such situations haptic technology can give feedback similar to the real experience and hence aid the training of a pilot.

    5. VIDEO CONFERENCING

      This technology can also be implemented in the field of wireless communication; it helps users interact virtually with remote objects and saves the time needed to travel to the place.

  5. CONCLUSION

    This technology will be implemented in daily-life applications very soon. Although it is new and research on it has only recently gained importance, it is drastically changing the way humans interact with machines, making the handling of complicated tasks easy and effective. Thus haptic technology will change the way humans interact with machines.

  6. REFERENCES

Ahlers, R-J., Lu, J. (1989). Stereoscopic vision – an application oriented overview. Proceedings of the SPIE – The International Society for Optical Engineering, Vol. 1194, Illumination and Image Sensing for Machine Vision IV, pp. 298-308. Bellingham, WA: SPIE.

Bhatnagar, D.K. (1993). Position Trackers for Head Mounted Display Systems: A Survey. UNC Technical Report No. TR93-010 [available through anonymous ftp in ftp.cs.unc.edu:/pub/publications/techreports and by e-mailing netlib@cs.unc.edu and sending the request "get 93-010.ps from techreports"]. Chapel Hill, NC: Dept. of Computer Science, University of North Carolina.

Bryson, S., (1993). Virtual reality hardware, In: Implementing Virtual Reality, Coursenotes #43, ACM SIGGRAPH 93, p1.3.16-1.3.24. New York, NY: ACM.

Burdea, G., Zhuang, J., (1991). Dextrous telerobotics with forcefeedback – an overview. Part 1: Human factors, Robotica v9 pp 171-178

Burdea, G., Zhuang, J., (1991). Dextrous telerobotics with forcefeedback – an overview. Part 2: Control and implementation, Robotica v9 (1991) pp 291-298

Calvert, T.W., Chapman, A.E. (1994). Analysis and synthesis of human movement. In: Handbook of Pattern Recognition and Image Processing: Computer Vision, ? (Ed.), pp. 431-474. London, UK: Academic Press.

Dorner, B. (1994). Chasing the colour glove. MSc. Thesis, School of Computer Science, Simon Fraser University, British Columbia, Canada.

Doyle, K., (1993). Comp.robotics Frequently Asked Questions. [available through anonymous ftp at rtfm.mit.edu in pub/usenet/news.answers/robotics- faq/part1, part2 and part3] 70+ pages.

Eglowstein, H. (1990). Reach out and touch your data. Byte (July), pp. 283-290.

Ferrin, Frank J. (1991). Survey of helmet tracking technologies. SPIE – The International Society for Optical Engineering – Large-Screen Projection, Avionic, and Helmet-Mounted Displays, Volume 1456 (pp. 86-94). Bellingham, WA, USA: SPIE
