Smart Keypad And Indoor Navigation Assist Device For Visually Challenged

DOI : 10.17577/IJERTV2IS4850


Mrs. S. Yamini Priya[1], Mr. K. Karthik Raj[2], Mr. P. Vaikunththivagaran[3], Mr. A. Jagadheeshwarar[4]

Sri Ramakrishna Engineering College, Vattamalaipalayam, Coimbatore-641022

Abstract

Virtual technology replaces hardware components with software modules, reducing system complexity and thereby increasing system efficiency. This project replaces the keypad of a handheld communication device (mobile phone) with camera-mounted spectacles that convert hand-gesture signals into a mobile number and place a call to the desired person. The keypad acts as an assistive technology for the visually challenged, supporting their inputs with audio feedback generated by a speech synthesizer. Using the LabVIEW Vision Acquisition and Vision Assistant tools, pattern identification and optical character recognition (OCR) are performed. The fingers of the hand act as a 4x3 matrix that serves as a virtual keypad, and the thumb acts as the pointer that decides which number is clicked. Training with black-coloured caps on the fingertips is necessary to operate this technology efficiently. Beyond the communication device, the same camera is used in an indoor navigation assist device based purely on static-object image processing. This technology may replace the Radio Frequency Identification (RFID) and ultrasonic based indoor navigation assist devices used to find the current location of the user. The cost of the technology is considerably low, and it can serve several purposes. Images at different predefined locations are captured, in which the static objects give the location information; these are compared with real-time images to guide the person with location feedback. Public buildings such as hospitals, railway stations and airports can host a transmitter that transmits the building infrastructure based on the static-object images, assisting indoor navigation.

Index Terms: keypad for visually challenged, location identification, static object based image processing

  1. Introduction

    Vision contributes the major share of the sensory information we perceive from the surrounding environment. Almost 10% of people living today suffer from a visual disorder [5]. A person with little or no vision faces two lifelong challenges: accessing the world of information and navigating through space. The past decade has seen rapid development in digital technologies. Accessing these technologies requires input devices such as a keypad for numeric and alphabetic input. Special keypads, such as the Braille keypad, were designed to assist visually challenged people. For mobile phones in particular, there are many modes of access; special keys with audio feedback ease the use of wearable computing devices [1]. The cost of these technologies is considerably high: solutions for eyes-free mobile texting range from $400 to $6000 [2].

    In addition, to assist visually challenged people in indoor navigation, many modules combining a wearable camera, a computing device and a headset have been used. They help in obstacle avoidance and in knowing the current location through computer-generated guidance. Though good progress has been made toward an ideal indoor navigation system, it faces many problems in practical application [3][6].

  2. Related works

    Chris Harrison, Desney Tan and Dan Morris of Carnegie Mellon University developed a technology in which the skin is used as an input surface. It identifies hand gestures by analyzing the mechanical vibrations that arise during a finger tap, using a bio-acoustic sensor that senses the occurrence of the input given to the system [6].

    Brian Frey, Caleb Southern and Mario Romero from the Georgia Institute of Technology designed BrailleTouch, a low-cost device that enables blind people to use a mobile phone for texting and making calls [2].

    João José, Miguel Farrajota, João M.F. Rodrigues and J.M. Hans du Buf from the University of the Algarve (FCT and ISE), Faro, Portugal developed SmartVision, a wearable device for path-border recognition and obstacle detection. The device provides audio feedback that does not block normal environmental sounds [4].

  3. Functional block diagram and description

    Figure 1 shows the block diagram and workflow of the smart keypad and indoor navigation assist device.

    CAMERA:

    A camera with good video resolution capturing 30 frames per second is used; it is interfaced with the processor to process the acquired video.

    PRE-PROCESSING BLOCK:

    In this block the acquired images are prepared for further operation. For rapid processing, the image is thresholded to separate black from the surrounding colours using the colour threshold option in LabVIEW, and the processed image is then equalized using a lookup table.
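
    The paper implements this chain with the LabVIEW Vision Assistant tools; purely as an illustration, the following Python/OpenCV sketch performs an equivalent capture, black-colour threshold and equalization step (the camera index and the threshold value of 50 are assumptions):

      import cv2

      cap = cv2.VideoCapture(0)      # camera interfaced with the processor
      ok, frame = cap.read()         # one acquired frame of the video stream
      if ok:
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          # Keep only near-black pixels (the fingertip caps); brighter
          # surroundings are suppressed, mirroring the colour threshold step.
          _, mask = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)
          # Histogram equalization stands in for the lookup-table step.
          equalized = cv2.equalizeHist(gray)
      cap.release()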

    OPTICAL CHARACTER RECOGNITION (OCR):

    Optical Character Recognition (OCR) is used to recognize the fingers that serve as the smart keypad and the pointer. The recognized number is output through the speech synthesizer and is also given to the GSM module to call the number the user entered through hand gestures. A pattern recognition tool is used for static-object recognition, which lets the computing module provide the user with position information as audio feedback through the speech synthesizer.
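
    As a minimal sketch of this recognition step, assuming stored gesture template images stand in for the LabVIEW OCR character-set file (the file names and the 0.8 match threshold are illustrative):

      import cv2

      # One binary template per digit, playing the role of the trained
      # character set.
      TEMPLATES = {str(d): cv2.imread(f"gesture_{d}.png", cv2.IMREAD_GRAYSCALE)
                   for d in range(10)}

      def recognize(binary_frame):
          """Return the digit whose template best matches the frame, or None."""
          best_digit, best_score = None, 0.0
          for digit, template in TEMPLATES.items():
              score = cv2.matchTemplate(binary_frame, template,
                                        cv2.TM_CCOEFF_NORMED).max()
              if score > best_score:
                  best_digit, best_score = digit, score
          return best_digit if best_score > 0.8 else None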

    COMPUTING MODULE:

    This module positions the user in the indoor environment. It computes the relationship between static objects in the acquired image and assists the user in knowing his or her current location.

    SPEECH SYNTHESIZER:

    This module synthesizes a computer-generated voice signal from the text produced by the computing module. The audio output is given as feedback to the user through headphones.
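
    The paper builds the synthesizer as a LabVIEW VI; a minimal stand-in using the pyttsx3 text-to-speech library illustrates the feedback step:

      import pyttsx3

      engine = pyttsx3.init()

      def speak(text):
          """Read a recognized digit or location name out through the headphones."""
          engine.say(text)
          engine.runAndWait()

      speak("5")                # feedback after a key selection
      speak("Main entrance")    # location feedback during navigation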

    MOBILE PHONE MODULE:

    A SIM-300 module is used for mobile communication. Interfacing the GSM module with the virtual instrument via VISA (Virtual Instrument Software Architecture) completes the communication device module.
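
    The paper drives the SIM-300 from LabVIEW through VISA; the pyserial sketch below shows the equivalent standard GSM AT commands (the port name, baud rate and phone number are assumptions):

      import serial

      gsm = serial.Serial("COM3", 9600, timeout=1)   # serial link to the SIM-300

      def dial(number):
          gsm.write(f"ATD{number};\r".encode())  # ATD<number>; places a voice call

      def hang_up():
          gsm.write(b"ATH\r")                    # ATH ends the call

      dial("9876543210")   # hypothetical number entered through hand gestures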

    Fig 1: Block diagram of smart keypad and indoor navigation assist device

  4. Smart Keypad Description

    The camera mounted on the spectacles captures the image, which is given to the pre-processing block where the black colour is extracted as the region of interest. The black caps the user wears on the fingertips virtually represent the keypad, as shown in Figure 2. This process is carried out by the colour threshold and lookup table modules, where the image is equalized as shown in Fig 3 and Fig 4.

    Fig 2: Original Image

    Fig 3: Colour Threshold of the acquired image

    Fig 4: Lookup table equalization of the acquired image

    Different hand gestures are recorded and compared with the OCR character set file, and the appropriate number appears on the indicator of the virtual instrument. Figure 5 shows the positions used to select the desired number with the thumb. Each number is given to the speech synthesizer module, designed as a LabVIEW VI, which gives audio feedback to the user. This process continues until the user completes the desired number. Options to call, delete and end a call are assigned to specific hand gestures, as shown in Figure 6 and Figure 7; used alongside the numeric inputs, these let the device act as a mobile phone. The entered number is sent to the SIM-300 module to place the call, with VISA carrying the data from the VI to the GSM device.
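
    A sketch of the key-lookup logic, assuming the thumb tip has already been located and reduced to (finger, segment) indices; whether fingers map to rows or columns of the 4x3 layout in Figure 5 is an assumption here:

      # Standard phone keypad layout spread across four fingers,
      # three segments per finger.
      KEYPAD = [["1", "2", "3"],
                ["4", "5", "6"],
                ["7", "8", "9"],
                ["*", "0", "#"]]

      def key_for(finger, segment):
          """finger 0-3 selects the row, segment 0-2 the column."""
          return KEYPAD[finger][segment]

      # Concatenate successive thumb selections into the dialled number.
      entered = "".join(key_for(f, s) for f, s in [(0, 1), (3, 1)])  # "20"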

    Fig 5: Smart keypad format

    Fig 6: Hand gesture for selecting CALL option

    Fig 7: Hand gesture for DELETE option

  5. Indoor Navigation Assistance Module

    When the user selects the indoor navigation assistance option, the acquired image is given directly to the indoor navigation assistance module, where it is compared with the OCR file created previously from a set of landmark images focused on the static objects in them. When an image matches the trained OCR file, the location name is given as a string output, which is passed to the speech synthesizer module to announce the location as audio feedback.
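
    A sketch of the location lookup, using ORB feature matching as a stand-in for the paper's pattern-recognition tool (the landmark set and both match thresholds are illustrative assumptions):

      import cv2

      orb = cv2.ORB_create()
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

      def locate(frame_gray, landmarks):
          """landmarks maps a location name to a stored grayscale landmark image."""
          _, frame_desc = orb.detectAndCompute(frame_gray, None)
          best_name, best_count = None, 0
          for name, reference in landmarks.items():
              _, ref_desc = orb.detectAndCompute(reference, None)
              if frame_desc is None or ref_desc is None:
                  continue
              matches = matcher.match(frame_desc, ref_desc)
              good = [m for m in matches if m.distance < 40]
              if len(good) > best_count:
                  best_name, best_count = name, len(good)
          return best_name if best_count > 40 else None   # location string or None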

  6. Results and Discussions

    When the user chooses the indoor navigation assistance option, the module compares the acquired image, shown in Figure 8, with the pre-stored image set; when the images match, the string corresponding to the location is indicated and given to the speech synthesizer, as represented in Figure 9. This audio output through the headphones helps the user identify the current location while navigating the indoor environment. The user can also switch over to the smart keypad option. As in the indoor navigation assistance module, acquired images are compared with the stored character set file after image thresholding and equalization. Figure 10 shows the processed image in which the hand-gesture pattern is identified and compared. When a character matches, the number is displayed on the indicator and converted to an audio signal for the user, so that he or she can confirm that the desired event occurred. Figure 11 shows the output of the smart keypad module. When the user decides to call the number, he or she uses the corresponding hand gesture; data from the VI is transferred to the SIM-300 module, which places and ends the call. The system thus serves as an augmentative device for the visually impaired, both for navigation and for providing inputs to electronic devices.
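
    Tying the pieces together, a rough end-to-end loop for the smart keypad path, reusing the recognize, speak and dial helpers sketched earlier (the "call" gesture label is an assumption):

      import cv2

      cap = cv2.VideoCapture(0)
      number = ""
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          key = recognize(gray)          # gesture -> digit or option
          if key == "call":
              speak("Calling " + number)
              dial(number)               # hand over to the SIM-300 module
              break
          elif key is not None:
              number += key
              speak(key)                 # audio confirmation of each digit
      cap.release()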

    Fig 8: Acquired image of indoor navigation assist module

    Fig 9: Output of Indoor navigation assist module

    Fig 10: Identification of hand gesture by smart keypad module

    Fig 11: Identified Characters and concatenated number

  7. Future Directions

A camera mounted on spectacles serving a dual function in assisting the visually challenged can spark the development of multi-function devices that aid independent navigation and access to digital technologies. The concept can be extended to sighted users and developed into a commercial product by projecting the numbers onto the hand when the smart keypad module is selected, creating a true virtual keypad. Alphabetic input can also be implemented by placing three consecutive letters on each key and selecting a letter by pressing the same key repeatedly, as on a normal mobile keypad, so the device serves as an input device for both numbers and letters.

Augmented reality (AR) glasses can be used so that, when the user looks through the glass, virtual objects such as landmarks are projected onto the lens; combining the background image with the virtual image enhances the information available for indoor navigation, giving visual rather than audio feedback. With advanced computational methods, directional feedback for indoor navigation can be developed, which is vital for an autonomous navigation module. Features such as face detection, text reading and object recognition can be added at the same cost, eventually developing the module into a virtual computer in which the AR glass acts as the monitor and the smart keypad as the input device.

8. References

  1. Brian Frey, Caleb Southern, Mario Romero, BrailleTouch: Mobile Texting for the Visually Impaired, Georgia Institute of Technology, 2011.

  2. Chris Harrison, Desney Tan, Dan Morris, Skinput: Appropriating the Body as an Input Surface, Carnegie Mellon University, 2010.

  3. R. Ivanov, Indoor navigation system for visually impaired, in Proc. Int. Conf. on Computer Systems and Technologies, Sofia, 2010, pp. 143-149.

  4. J.M. Hans du Buf, João Barroso, João M.F. Rodrigues, Hugo Paredes, Miguel Farrajota, Hugo Fernandes, João José, Victor Teixeira, Mário Saleiro, The SmartVision Navigation Prototype for Blind Users, International Journal of Digital Content Technology and its Applications, Vol. 5, No. 5, 2011.

  5. Leo M. Chalupa and John S. Werner (2004), The Visual Neurosciences, A Bradford Book, The MIT Press, Cambridge, London, England. ISBN 0-262-03308-9.

  6. V. Renaudin, O. Yalak, P. Tomé, and B. Merminod, Indoor navigation of emergency agents, European Journal of Navigation, vol. 5, pp. 36-45, July 2007.

  7. T. Starner, Wearable Computing and Contextual Awareness, Academic Dissertation, MIT Media Laboratory, Cambridge, 1999.
