A Smart Device for People with Disabilities using ARM7

DOI : 10.17577/IJERTV3IS120593


Pande Divyaprasad Digambarrao

M.Tech Embedded System Student

VBIT, Aushapur (V), Ghatkesar Mandal, R.R. Dist., Hyderabad, Telangana, India.

M. Praveen Kumar

Assistant Professor

VBIT, Aushapur (V), Ghatkesar Mandal, R.R. Dist., Hyderabad, Telangana, India.

Abstract: To express our thoughts, we easily communicate with different people in various ways. However, this is difficult for people with Low/High (L/H) syndrome, blind people, and paralyzed or physically handicapped (PH) people. Blind people face difficulties in writing exams, while L/H syndrome and paralyzed or PH people find it extremely difficult to express their ideas. The device presented here offers a solution to these problems: an ARM7 LPC2148 microcontroller controls a speech-to-text module, a keypad (buttons), flex sensors, and a text-to-speech module. With these components, all of the above problems are addressed in a single smart device. Using it, blind people can write their exams without an interpreter via the speech-to-text module; by pressing keys on the keypad, L/H syndrome people can express their basic thoughts through the text-to-speech module; and with the flex sensors, paralyzed or PH people can convey their messages.

Keywords: ARM7 LPC2148 microcontroller, flex sensors, speech-to-text module, text-to-speech module, keypad (buttons).

INTRODUCTION

In day-to-day life, we express our thoughts in various ways with ease. The same task is hard for persons suffering from L/H syndrome and for blind, paralyzed, or PH persons. While most of us can express our thoughts effectively and easily, these people must struggle to do their own work. This project aims to reduce that effort: using this device, persons with disabilities such as blindness, L/H syndrome, and paralysis or other physical handicaps can carry out their day-to-day tasks much more easily.

When blind people go to school or college to learn, they face unique problems in academic life as well: a human interpreter is normally needed for a blind student to complete schooling and write examinations. For such people, this project acts as the interpreter, allowing blind students to write their examinations without any outside help.

When L/H syndrome people try to express something, listeners often lack the patience to understand them, so they resort to sign language. With this project, they can express their basic thoughts just by pressing a key on the keypad; the word for each key is already stored in audio form.

People who are paralyzed or PH face many barriers in any physical activity. They use aids such as wheelchairs, guides, support dogs, and white canes, yet even with these aids communication remains very difficult. With this project, such people can express their basic needs simply by wearing gloves fitted with flex sensors or by placing flex sensors on their fingers.

  1. SYSTEM HISTORY

    For blind people, several technologies are already available. Saaid et al. (2009) developed a prototype RFIWS for blind people that acts as a guide for walking along a correct path [15]. Angin et al. (2010) proposed an open and extensible architecture that uses the Web to give blind users location-specific information [3]. Bourbakis et al. (2008) proposed a prototype multimodal mobile device that assists in navigation and reading using Stochastic Petri-Nets [5]. Oliveira et al. (2012) employed sensory replacement and haptic fingertip reading in inclusive classrooms [13]. Brilhault et al. (2011) designed an assistive device based on the fusion of GPS, an adapted GIS, and vision-based positioning [6]. Zhang et al. (2008) implemented a university Braille translation system [20]. Ab Aziz and Hendr (2012) discussed machine translation [1]. A solution for speech-impaired people was given by Rajeswari et al. (2010), who proposed an AG-500 Articulograph sensor and an equivalent SAMPA code for PHONWEB [14]. Begum and Hasanuzzaman (2009) implemented a Bangladeshi sign language recognition system [4].

    Krishnaswamy and Srikkanth (2010) sensed the vibrations produced in the vocal cords, encoded them, and compared them against a reference threshold to produce sound [9]. Almeshrky and Ab Aziz (2012) proposed a transfer-based approach to translate dialogue between the Arabic and Malay languages [2]. Hamid and Ramli (2012) proposed a method to improve an audio-based system [8]. For people affected by paralysis, some technologies are also available. Shih et al. (2011) used a commercial numerical keyboard as the input device [17]. Nazemi et al. (2011) implemented a Digital Talking Book player that lets the visually impaired read, navigate, search, and bookmark written material [12]. Seki and Kiso (2011) proposed an adaptive driving control scheme based on a fuzzy algorithm for electric wheelchairs [16].

    Xu et al. (2012) proposed a solution to monitor the movement of elderly people in a smart home [18]. Melek et al. (2005a) gave a solution for people affected by diaphragm paralysis [10]. Melek et al. (2005b) proposed a solution for people affected by facial paralysis due to Lyme disease [11]. Zahmatkash and Vafaeenasab (2011) discussed people affected by knee osteoarthritis [19]. Dursun (2008) proposed an intelligent speed controller based on fuzzy logic to increase the comfort of disabled people [7]. These are some of the existing assistive systems for people with disabilities.

  2. BLOCK DIAGRAM

    Fig. 1. Block diagram of the project device (blocks: voice input, speech-to-text module (EasyVR), ARM7, text-to-speech module (Emic 2), speaker, display, flex sensors, buttons)

    The block diagram consists of three parts for three different kinds of disabilities: blind people, L/H syndrome people, and paralyzed or PH people.

    1. In Fig. 1, the voice of a blind person is converted to text using the speech-to-text module. The text is then processed by the ARM7 and displayed on a 16x2 LCD, which can also store the data.

    2. L/H syndrome people can use the keypad provided to them to express their daily basic needs. Each button on the keypad is assigned a separate basic need; after a button is pressed, the assigned need is shown on the display and processed by the ARM7. The displayed text is then given to the text-to-speech module, where it is converted to voice and output through the speakers (a small illustrative sketch of this mapping follows this list).

    3. The third part of the block diagram is the use of flex sensors for paralyzed or PH people. The flex sensors can be placed directly on the fingers or mounted on gloves, and each sensor is assigned a particular basic need. When a finger is moved, the need assigned to that finger is shown on the display and its audio output is then delivered through the speakers.
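    As a concrete illustration of the keypad path in item 2 above, the sketch below maps each button to a stored phrase that is forwarded to the text-to-speech module. This is a minimal sketch, not the authors' firmware: the pin assignments, the phrase list, and the uart0_send_string() helper are assumptions made for illustration (the helper itself is sketched under the ARM7 LPC2148 subsection below).

      /* Hypothetical keypad-to-phrase mapping for the L/H syndrome path.
       * Assumes four buttons on P0.16..P0.19 (active low) and a helper
       * uart0_send_string() that forwards text to the Emic 2 module. */
      #include <stdint.h>

      #define IO0PIN (*(volatile uint32_t *)0xE0028000) /* LPC2148 GPIO port 0 pin value */

      extern void uart0_send_string(const char *s);     /* assumed UART helper */

      static const char *phrases[4] = {                 /* example basic needs */
          "I need water", "I need food", "I need medicine", "Please call someone"
      };

      /* Poll the buttons once; speak the phrase of the first pressed button. */
      void keypad_poll(void)
      {
          for (int i = 0; i < 4; i++) {
              if (!(IO0PIN & (1u << (16 + i)))) {       /* button i pressed (active low) */
                  uart0_send_string("S");               /* Emic 2 "say" command (see the text-to-speech subsection) */
                  uart0_send_string(phrases[i]);
                  uart0_send_string("\n");              /* newline ends the command */
                  break;
              }
          }
      }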

  3. PROJECT DEVICE SYSTEM

    The project device system consists of the following major components:

    1. Speech-to-Text Module

    2. ARM7 LPC2148 Microcontroller


    3. Text-to-Speech Module

    4. Flex Sensors

    Each of these components has different features required for this project device. Their features are described one by one below.

    1. Speech-to-Text Module

      The speech-to-text module is a multi-purpose speech recognition module designed to easily add versatile, robust, and cost-effective speech recognition capabilities to virtually any application. It is the EasyVR module, which can be used with any host with a UART interface powered at 3.3 V-5 V, such as PIC and Arduino boards. Example applications include home automation, such as voice-controlled light switches, locks, or beds, and adding "hearing" to the most popular robots on the market [21].

      1. EasyVR Features:

        1. A host of built-in Speaker Independent (SI) commands for ready-to-run basic controls, in the following languages:

          1. English (US)

          2. Italian

          3. German

          4. French

          5. Spanish

          6. Japanese

        2. Supports up to 32 user-defined Speaker Dependent (SD) commands or triggers, as well as Voice Passwords. SD custom commands can be spoken in any language.


        3. Easy-to-use and simple Graphical User Interface to program Voice Commands and audio.

        4. The module can be used with any host with a UART interface.

        5. 3 GPIO lines (IO1, IO2, IO3) that can be controlled by new protocol commands.

        6. PWM audio output that supports 8-ohm speakers.

        7. Sound playback feature [21].

          Fig. 2. Speech-to-Text Module (EasyVR) [21]
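      To show what the host side of this module can look like, the sketch below triggers one recognition over the UART and waits for the reply. The EasyVR bridge protocol exchanges single ASCII bytes, but the command and status byte values used here are placeholders, not the documented command set; the real values must be taken from the EasyVR protocol manual. The uart0_* helpers are sketched under the ARM7 LPC2148 subsection below.

      /* Hypothetical host-side exchange with the EasyVR module over UART.
       * The protocol is byte-oriented; the byte values below are placeholders,
       * NOT the documented EasyVR command set. */
      extern void uart0_send_byte(char c);   /* assumed UART helpers */
      extern char uart0_recv_byte(void);

      #define CMD_BREAK     'b'   /* placeholder: interrupt current activity */
      #define CMD_RECOG_SI  'i'   /* placeholder: start SI recognition */
      #define STS_RESULT    'r'   /* placeholder: a command was recognized */

      /* Ask the module to recognize one built-in (SI) command from word set 0.
       * Returns 1 on a successful recognition, 0 otherwise. */
      int easyvr_recognize_si(void)
      {
          uart0_send_byte(CMD_BREAK);          /* make sure the module is idle */
          (void)uart0_recv_byte();             /* consume its acknowledgement */

          uart0_send_byte(CMD_RECOG_SI);       /* start recognition... */
          uart0_send_byte('A' + 0);            /* ...on word set 0 (argument encoding assumed) */

          char status = uart0_recv_byte();     /* blocks until the module answers */
          return (status == STS_RESULT);
      }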


    2. ARM7 LPC2148 Microcontroller

      The LPC2148 microcontroller is based on a 16/32-bit ARM7TDMI-S CPU with real-time emulation and embedded trace support, and combines the microcontroller with embedded high-speed flash memory ranging from 32 kB to 512 kB. For code-size-critical applications, the alternative 16-bit Thumb mode reduces code size by more than 30% with minimal performance penalty.

      Due to its tiny size and low power consumption, the LPC2148 is ideal for applications where miniaturization is a key requirement, such as access control and point-of-sale. Serial communication interfaces ranging from a USB 2.0 full-speed device, multiple UARTs, SPI, and SSP to the I2C bus, together with 8 kB to 40 kB of on-chip SRAM, make these devices very well suited for soft modems, voice recognition, and low-end imaging, providing both large buffer sizes and high processing power.

      Various 32-bit timers, single or dual 10-bit ADCs, a 10-bit DAC, PWM channels, and 45 fast GPIO lines with up to nine edge- or level-sensitive external interrupt pins make this microcontroller suitable for industrial control and medical systems [22].
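      Both the EasyVR and the Emic 2 talk to the LPC2148 over a UART, so a small UART driver underlies all of the modules described here. The routine below is a minimal sketch using the standard LPC214x register names from the NXP user manual [22]; the 15 MHz peripheral clock and the 9600 baud rate are illustrative assumptions, not values taken from this paper.

      /* Minimal UART0 setup for the LPC2148 (standard LPC214x registers).
       * Assumes PCLK = 15 MHz and 9600 baud; both are illustrative choices. */
      #include <lpc214x.h>   /* Keil-style register definitions */

      void uart0_init(void)
      {
          PINSEL0 |= 0x00000005;   /* P0.0 = TXD0, P0.1 = RXD0 */
          U0LCR    = 0x83;         /* 8 data bits, 1 stop bit, enable divisor latch */
          U0DLL    = 97;           /* 15 MHz / (16 * 9600) ~= 97 -> ~9600 baud */
          U0DLM    = 0;
          U0LCR    = 0x03;         /* lock divisor, keep 8N1 format */
      }

      void uart0_send_byte(char c)
      {
          while (!(U0LSR & 0x20)) ;    /* wait until transmit holding register is empty */
          U0THR = c;
      }

      char uart0_recv_byte(void)
      {
          while (!(U0LSR & 0x01)) ;    /* wait until a byte has been received */
          return U0RBR;
      }

      void uart0_send_string(const char *s)
      {
          while (*s) uart0_send_byte(*s++);
      }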

    3. Text-to-Speech Module

      The text-to-speech module, named Emic 2, is an unconstrained, multi-language voice synthesizer that converts a stream of digital text into natural-sounding speech output. Using the universally recognized DECtalk text-to-speech synthesizer engine, the Emic 2 provides full speech synthesis capabilities for any embedded system via a simple command-based interface [23].

      1. Features are as follows:

        1. High-quality speech synthesis for English and Spanish languages

        2. Nine pre-defined voice styles comprising male, female, and child

        3. Dynamic control of speech and voice characteristics, including pitch, speaking rate, and word emphasis

        4. Industry-standard DECtalk text-to-speech synthesizer engine (5.0.E1)

        5. On-board audio power amplifier and 1/8" (3.5 mm) audio jack

        6. Single-row, 6-pin, 0.1" header for easy connection to a host system [23]

      2. Application Ideas:

        1. Reading Internet-based data streams (such as e-mails or Twitter feeds)

        2. Conveying status or sensor results from robots, scientific equipment, or industrial machinery

        3. Language learning or speech aids for educational environments [23]

          Fig. 3. Emic 2 Text-to-Speech Module
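      Driving the Emic 2 from the LPC2148 reduces to sending short ASCII commands over the UART. The sketch below speaks one phrase; it assumes the Parallax-documented convention that the module prints ':' when it is ready and that a line of the form S<text> starts synthesis, but both should be verified against the current Emic 2 documentation. The uart0_* helpers are the ones sketched in the ARM7 subsection above.

      /* Speak one phrase on the Emic 2 (hedged sketch; check the command
       * set against the Parallax Emic 2 documentation). */
      extern void uart0_send_byte(char c);
      extern char uart0_recv_byte(void);
      extern void uart0_send_string(const char *s);

      static void emic2_wait_ready(void)
      {
          while (uart0_recv_byte() != ':') ;   /* module prints ':' when idle */
      }

      void emic2_say(const char *text)
      {
          emic2_wait_ready();                  /* block until the module is ready */
          uart0_send_byte('S');                /* "say" command */
          uart0_send_string(text);
          uart0_send_byte('\n');               /* newline triggers synthesis */
      }

      In the keypad sketch under the block diagram section, emic2_say(phrases[i]) could replace the three raw uart0_send_string() calls.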

    4. Flex Sensors

      As shown in Fig. 4, flex sensors are used to sense the finger movements of paralyzed or PH people. The flex sensors are based on resistive carbon-element technology: a thin flexible substrate carries a carbon resistive element, which acts as an analog variable resistor used in a voltage divider. Depending on the amount of bend, the sensor changes its electrical resistance; the more it is bent, the higher the resistance. Flex sensors come as thin strips 1-5 inches long, in uni-directional or bi-directional versions, with typical resistance ranges of 1-20 kΩ or 50-200 kΩ.

      Fig. 4. Flex Sensors mounted on Gloves

      As shown in Fig. 5, bending the sensor substrate produces a resistance related to the bend radius: a flex of 0° gives about 10 kΩ, while a flex of 90° gives about 30-40 kΩ.

      Fig. 5. Flex sensor variable-resistance readings: flat (nominal resistance), 45° bend (increased resistance), 90° bend (further increased resistance)

      1. Features of Flex Sensors:

        1. Angle Displacement Measurement

        2. Bends and Flexes physically with motion device

        3. Possible Uses

          1. Robotics

          2. Gaming (Virtual Motion)

          3. Medical Devices

          4. Musical Instruments

          5. Physical Therapy

        4. Simple Construction

        5. Low Profile

        6. Life Cycle: >1 Million

        7. Temperature Range: -35°C to +80°C

        8. Flat Resistance: 25 kΩ

    As shown in Fig. 6, the flex sensor is used in a voltage divider followed by an impedance buffer built from a single-sided operational amplifier; the op-amp's low bias current reduces the error caused by the source impedance of the flex sensor acting as a voltage divider [24].

    Fig. 6. Basic Flex Sensor Circuit Diagram
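    Numerically, with a 3.3 V supply and a 10 kΩ fixed resistor on the lower leg of the divider, a flat sensor (about 10 kΩ) gives Vout = 3.3 x 10/(10+10) ≈ 1.65 V, while a 90° bend (about 30 kΩ) gives 3.3 x 10/(10+30) ≈ 0.83 V. The sketch below reads that voltage on one ADC channel of the LPC2148 and applies a simple threshold; the channel, pin, resistor value, and threshold are illustrative assumptions, not values from the paper.

      /* Read one flex sensor on AD0.1 (P0.28) of the LPC2148 and decide
       * whether the finger is bent. Assumed divider: 3.3 V -- flex sensor --
       * node -- 10 kOhm -- GND, so bending (higher resistance) lowers the voltage. */
      #include <lpc214x.h>
      #include <stdint.h>

      void adc_init(void)
      {
          PINSEL1 |= (1u << 24);   /* P0.28 = AD0.1 */
          AD0CR = (1u << 1)        /* select channel AD0.1 */
                | (4u << 8)        /* CLKDIV: ADC clock = PCLK / 5 (< 4.5 MHz) */
                | (1u << 21);      /* PDN = 1: ADC operational */
      }

      uint16_t adc_read(void)
      {
          AD0CR |= (1u << 24);                   /* start conversion now */
          while (!(AD0GDR & (1u << 31))) ;       /* wait for DONE */
          return (AD0GDR >> 6) & 0x3FF;          /* 10-bit result */
      }

      /* ~1.65 V flat, ~0.83 V at 90 degrees; threshold halfway at ~1.24 V.
       * 1.24 / 3.3 * 1023 ~= 384. */
      int finger_is_bent(void)
      {
          return adc_read() < 384;
      }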

  4. SYSTEM FLOWCHART

    The system flowchart represents the working process of the project, step by step:

    1. Switch on (start) the device.

    2. Choose one option from the following: a) Blind People b) L/H Syndrome People c) Paralyzed or PH People.

    3. If the Blind People option is selected, start speaking.

    4. The speech is displayed on the LCD.

    5. Stop or end.

    6. If the L/H Syndrome People option is selected, press any button on the keypad as required.

    7. The function of that button is delivered through the speaker.

    8. Stop/end, or move to step 6.

    9. If the Paralyzed or PH People option is selected, move a finger as required.

    10. The function of that finger is displayed and its sound is delivered via the speaker.

    11. Stop/end, or move to step 9.

    12. After step 5, 8, or 11, switch off (stop) the device.
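    The flowchart translates naturally into a small dispatch loop in the firmware. The sketch below is one hypothetical shape for it, reusing the helpers from the earlier sketches; the mode-selection mechanism (three buttons, imagined here on P0.20-P0.22) is an assumption, since the paper does not state how the option in step 2 is chosen electrically.

      /* Hypothetical top-level loop implementing the flowchart: pick a mode,
       * then service the corresponding input path until the device stops. */
      #include <lpc214x.h>

      extern void uart0_init(void);
      extern void adc_init(void);
      extern void keypad_poll(void);              /* L/H syndrome path */
      extern int  easyvr_recognize_si(void);      /* blind path (speech-to-text) */
      extern int  finger_is_bent(void);           /* paralyzed/PH path */
      extern void emic2_say(const char *text);

      enum mode { MODE_BLIND, MODE_LH, MODE_PH };

      static enum mode read_mode_switch(void)     /* assumed: buttons on P0.20..P0.22 */
      {
          if (!(IO0PIN & (1u << 20))) return MODE_BLIND;
          if (!(IO0PIN & (1u << 21))) return MODE_LH;
          return MODE_PH;
      }

      int main(void)
      {
          uart0_init();
          adc_init();

          for (;;) {                              /* steps 2-12 of the flowchart */
              switch (read_mode_switch()) {
              case MODE_BLIND:                    /* steps 3-5 */
                  if (easyvr_recognize_si()) {
                      /* on success, the recognized word would be shown on the LCD */
                  }
                  break;
              case MODE_LH:                       /* steps 6-8 */
                  keypad_poll();
                  break;
              case MODE_PH:                       /* steps 9-11 */
                  if (finger_is_bent())
                      emic2_say("I need water");  /* example mapped need */
                  break;
              }
          }
      }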

  5. CONCLUSION

This device is made especially for people who are blind, who suffer from L/H syndrome, or who are paralyzed or PH. With its help these people can do their own work and convey their thoughts to others easily. It is a highly stable, reliable, and efficient device that is also cost-effective and robust, which is why we have named it "A Smart Device for People with Disabilities using ARM7".

ACKNOWLEDGMENT

I would like to express my gratitude to my project guide, Assistant Prof. Praveen Kumar, for his insight and broad range of knowledge that he willingly shared with me during the research and writing of my project. He was there whenever I got stuck with any problem.

I express my appreciation and indebtedness to my Head of the department Prof. Dr. B. Brahmi Reddy and to the members of the Department, who helped me in many ways during my project work.

I am thankful to my family members and friends for their love and support.

REFERENCES

  1. Ab Aziz, M.J. and A.M. Hendr, 2012. Translation of classical Arabic language to English. J. Applied Sci., 12: 781-786.

  2. Almeshry, H.A. and M.J. Ab Aziz, 2012. Arabic Malay machine translation for a dialogue system. J. Appl. Sci., 12:1371-1377.

  3. Angin, P., B. Bhargava and S. Helal, 2010. A mobile-cloud collaborative traffic lights detector for blind navigation. Proceedings of the International Conference on Mobile Data Management, May 23-26, 2010, Kansas City, MO, USA, pp: 396-401.

  4. Begum, S. and M. Hasanuzzaman, 2009. Computer vision-based Bangladeshi sign language recognition system. Proceedings of the International Conference on Computer and Information Technology, December 21-23, 2009, Dhaka, pp: 414-419.

  5. Bourbakis, N., R. Keefer, D. Dakopoulos and A. Esposito, 2008. A multimodal interaction scheme between a blind user and the Tyflos assistive prototype. Proceedings of the IEEE International Conference on Tools with Artificial Intelligence, Volume 2, November 3-5, 2008, Dayton, OH, pp: 487-494.

  6. Brilhault, A., S. Kammoun, O. Gutierres, P. Truillet and C. Jouffrais, 2011. Fusion of artificial vision and GPS to improve blind pedestrian positioning. Proceedings of International Conference on New Technologies, Mobility and Security (NTMS), February 7-10, 2011, Paris, France, pp: 1-5.

  7. Dursun, M., 2008. A wheelchair driven with fuzzy logic controlled switched reluctance motor supplied by PV arrays. J. Applied Sci., 8: 3351-3360.

  8. Hamid, L.A. and D.A. Ramli, 2012. Performance of qualitative fusion scheme for multi-biometric speaker verification system in noisy condition. J. Applied Sci., 12: 1282-1289.

  9. Krishnaswamy, C. and C. Srikkanth, 2010. Dynamic Acoustic for Dumb using Embedded Interface (DADEI). Proceedings of the International Conference on Computer Modeling and Simulation, Volume 3, January 22-24, 2010, Sanya, Hainan, pp: 525-527.

  10. Melek, I.M., D. Duman, C. Babayigit and E. Gali, 2005a. Co-existence of sickle cell disease and hemidiaphragm paralysis. J. Med. Sci., 5: 61-63.

  11. Melek, I.M., T. Duman and T. Eraslan, 2005b. Bilateral isolated facial paralysis due to Lyme disease. J. Biological Sci., 5: 532-535.

  12. Nazemi, A., C. Ortega-Sanchez and I. Murray, 2011. Digital talking book player for the visually impaired using FPGAs. Proceedings of the International Conference on Reconfigurable Computing and FPGAs, November 30-December 2, 2011, Cancun, pp: 493-496.

  13. Oliveira, F.C.M.B., F. Quek, H. Cowan and B. Fang, 2012. The Haptic Deictic System-HDS: Bringing blind students to mainstream classrooms. IEEE Trans. Haptics, 5: 172-183.

  14. Rajeswari, K., E. Jeevitha and V.K.G.K. Selvi, 2010. Virtual voice-the voice for the dumb. Proceedings of the International Conference on Computational Intelligence and Computing Research, December 28-29, 2010, Coimbatore, pp: 1-3.

  15. Saaid, M.F., I. Ismail and M.Z.H. Noor, 2009. Radio Frequency Identification Walking Stick (RFIWS): A device for the blind. Proceedings of the International Colloquium on Signal Processing and Its Applications (CSPA), March 6-8, 2009, Kuala Lumpur, pp: 250-253.

  16. Seki, H. and A. Kiso, 2011. Disturbance road adaptive driving control of power-assisted wheelchair using fuzzy inference. Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, August 30-September 3, 2011, Boston, MA, pp: 1594-1599.

  17. Shih, C.T., C.H. Shih and N.R. Guo, 2011. Development of a computer assistive input device using screen-partitioning and mousekeys method for people with disabilities. Proceedings of the International Conference on Computer Science and Service System (CSSS), June 27-29, 2011, Nanjing, pp: 2201-2204.

  18. Xu, B., Y. Ge, J. Chen, Z. Chen and Y. Ling, 2012. Elderly personal safety monitoring in smart home based on host space and travelling pattern identification. Inform. Technol. J., 11: 1063-1069.

  19. Zahmatkash, M. and M.R. Vafaeenasab, 2011. Comparing analgesic effects of a topical herbal mixed medicine with salicylate in patients with knee osteoarthritis. Pak. J. Biol. Sci., 14: 715-719.

  20. Zhang, X., C. Ortega-Sanchez and I. Murray, 2008. Reconfigurable PDA for the visually impaired using FPGAs. Proceedings of the International Conference on Reconfigurable Computing and FPGAs, December 3-5, 2008, Cancun, pp: 1-6.

  21. www.veear.eu

  22. www.nxp.com, Document identifier: LPC2141_42_44_46_48

  23. www.parallax.com, Product: Emic 2 Text-to-Speech Module (#30016).

  24. www.spectasymbol.com, Flex Sensor (FS) data sheet.
