Communication Aiding System for People with Speech Impairment

DOI : 10.17577/IJERTV3IS100630


Simi Oommen

    M. Tech – VLSI and Embedded Systems, Indira Gandhi College of Engineering, Nellikuzhi, Kothamangalam

      Kerala, India

      Greeshma Liz Jose

      Asst. Prof., Dept. of ECE

      Indira Gandhi College of Engineering, Nellikuzhi, Kothamangalam

      Kerala, India

      Abstract: Speech impairment can result from severe physical disabilities such as motor neuron disease or other neurological conditions. This paper is aimed at developing an advanced communication aid for people with severe speech impairment. The method gives rise to a new form of augmentative and alternative communication (AAC) device which recognizes the disordered speech of the user and builds messages that are converted into synthetic speech. The system is driven by a speech recognition engine, i.e. the process of automatically recognizing words spoken by a particular speaker based on individual information contained in the speech waveform. The new system retains the speed of communication as far as possible and also retains its naturalness. The same system can also be used to control different home appliances by voice.

      Index Terms: Speech impairment, Augmentative and Alternative Communication, Synthetic Speech, Speech Recognition System.

      1. INTRODUCTION

        Speech is a primary mode of communication among humans, and speech impairment is a major problem that affects quality of life [1]. Such impairment occurs with severe physical disabilities resulting from neurological conditions such as cerebral palsy, or from acquired neurological conditions due to traumatic brain injury or stroke. Impaired speech is often unintelligible to unfamiliar partners, so such people use augmentative and alternative communication (AAC) aids [1] to improve their spoken communication. People with severe speech and language impairment use AAC either as a permanent addition to their communication or as a temporary aid. These communication aids provide technology that helps them interact with others. Commercially available communication aids can work well with mild and even moderate speech impairment, but they do not work with severe speech impairment. Such aids are also sensitive to environmental factors, which reduces recognition accuracy. In addition, current communication aids rely on switches or keyboards for input, so they are relatively slow and disrupt eye contact [3], [4]. Communication aid users therefore need a device that is easy to operate and retains the speed as well as the naturalness of communication [5]. A further advantage of the proposed system is that the user can control home appliances with the same impaired voice, which makes the system very useful for people with disabilities.

      2. LITERATURE REVIEW

        Many communication aid systems are commercially available, but they are not a viable access solution for severe speech impairment; they work well only with mild and moderate impairment. The severity of the impairment has an inverse relation with the accuracy of speech recognition [6]. Such systems also depend on the environmental conditions that are likely to be encountered in realistic usage, which degrades the recognition accuracy. Thus, while automatic speech recognition has been used for many years as a method of access to technology by some people with disabilities whose speech is unimpaired, experience has shown that it is not a viable access solution for disordered speech, for the reasons above.

        Augmentative and alternative communication aids are of two types: unaided AAC and aided AAC. Unaided communication does not use any device; it relies on symbols or signs to convey messages. Aided communication uses a device, either non-electronic or electronic. Aided devices are further divided into low-tech and high-tech communication aids. Low-tech communication does not use any electronics or batteries, relying instead on communication boards, charts, or books.

      3. PROPOSED DESIGN

        This system is a new communication aid that is well suited to mild as well as severe speech impairment. Its main function is to recognize the disordered speech of the user and build messages, which are then converted into intelligible speech. The design of the system follows a user-centred approach. It introduces a new technique for building a small vocabulary, and it is based on a speaker-dependent recognition process that needs only a small amount of training data. The performance of the system is very good, even though recognition perplexity increases with highly disordered speech. Because the output of the system is intelligible speech, the system is very suitable for severe speech impairment. It gives reliable recognition of the trained words and is largely independent of environmental conditions.

        Fig. 1. Schematic diagram of the system

        This new communication device for severe speech impairment recognizes the disordered speech of the user and builds messages; the impaired speech is then converted into intelligible speech. The development of the system follows a user-centred design. The system consists of a microphone, which captures the impaired voice, and a speech recognizer, which processes and recognizes the input. The recognized words are passed to the message building module, which updates the screen based on the input; audio feedback is then given to the user to indicate the possible next inputs. The completed message is passed to the speech synthesizer, which produces the synthetic spoken output. The HMM speech-unit models of the ASR are trained using recorded data from many speakers. The design introduces a new technique for developing a small vocabulary, and the system is speaker dependent, meaning that it is trained by the individual who will be using it; it therefore needs only a small amount of training data. The initial recordings were collected from the user using either a headset or a microphone, and the signals were sampled at 8 kHz.
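        As a rough structural sketch of this data flow, the fragment below (in C, the same language as the embedded firmware described later) chains the stages together. Every function here is a hypothetical stub that only stands in for the real recognizer, message builder, and synthesizer modules; it is an illustration of the pipeline, not the actual implementation.

/* Structural sketch of the voice-input voice-output pipeline.
 * All stub bodies are placeholders that only illustrate the data flow. */
#include <stdio.h>

static int capture_utterance(short *buf, int max_samples)
{
    (void)buf;
    return max_samples;                 /* pretend the buffer was filled */
}

static int recognize_word(const short *buf, int n_samples)
{
    (void)buf; (void)n_samples;
    return 0;                           /* pretend word 0 was recognized */
}

static int build_message(int word, char *msg, size_t max_len)
{
    snprintf(msg, max_len, "word %d recognized", word);
    return 1;                           /* report that the message is complete */
}

static void synthesize_speech(const char *msg)
{
    printf("SPEAK: %s\n", msg);         /* stands in for the speech synthesizer */
}

int main(void)
{
    short samples[8000];                /* one second of audio at 8 kHz */
    char  message[128];

    int n = capture_utterance(samples, 8000);
    int word = recognize_word(samples, n);
    if (word >= 0 && build_message(word, message, sizeof message))
        synthesize_speech(message);     /* spoken output for the listener */
    return 0;
}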

        Fig. 2. Block diagram of the voice-input voice-output system

      4. SYSTEM WORKING

        The step-by-step operation of the system is as follows.

        1. Speech Recognition

          The speech recognition process is based on automatic speech recognition (ASR), which uses statistical models of speech units; in large-vocabulary systems these units are at the level of individual phones or sounds. Conventional ASR has disadvantages for people with speech impairment because the amount of training data available from such users is limited, training requires great effort, and the training material is highly variable. This system therefore uses a new speaker-dependent recognizer with a small vocabulary that needs only a small amount of training data. Training is done using the HMM toolkit with the Baum-Welch algorithm and is an iterative process: the user repeatedly speaks the words that form the vocabulary, each utterance is recorded, and feedback is given to the user on which words are recognized more accurately.
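          For reference, the re-estimation step of the Baum-Welch algorithm can be written in its standard textbook form (standard HMM notation; this is not a detail reported in the paper). With forward probabilities \alpha_t(i), backward probabilities \beta_t(i), transition probabilities a_{ij}, and observation likelihoods b_j(o_t), the state and transition posteriors are

\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\,\beta_t(j)}, \qquad
\xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{\sum_{k=1}^{N}\sum_{l=1}^{N} \alpha_t(k)\, a_{kl}\, b_l(o_{t+1})\, \beta_{t+1}(l)},

and the transition probabilities are re-estimated as

\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}.

          Each pass over the recorded training utterances cannot decrease the data likelihood, which is why the training proceeds iteratively until the word models converge.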

        2. Message Building Module

          The function of the message building module is to construct the messages that the user needs to communicate. Based on the recognized words, this module builds the messages. In the ideal and simplest form of message building, each word is recognized individually and the same word is spoken out in synthesized form. Because the system follows a user-centred design, the design process is based on the needs of the user, and message construction is based on the priorities of the user's communication. The message building module is used to generate frequently used phrases rapidly, for example for interacting on the phone or with unfamiliar partners in emergency situations. For instance, to generate the phrase "I want a cup of tea", the sequence of words "want cup tea" could generate the phrase. This method reduces the perplexity of recognition.
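          A minimal sketch of this keyword-to-phrase mapping is given below in C. The table entries and function names are illustrative assumptions, not the actual vocabulary or code of the system.

/* Illustrative keyword-to-phrase message builder (hypothetical entries).
 * A sequence of recognized keywords such as "want cup tea" is expanded
 * into the full phrase "I want a cup of tea". */
#include <stdio.h>
#include <string.h>

struct phrase_entry {
    const char *keywords;   /* space-separated recognized keywords */
    const char *phrase;     /* full phrase passed to the synthesizer */
};

static const struct phrase_entry table[] = {
    { "want cup tea",  "I want a cup of tea" },
    { "call help now", "Please call for help immediately" },
    { "phone home",    "Please phone my home number" },
};

/* Return the full phrase for a keyword sequence, or NULL if unknown. */
static const char *expand_keywords(const char *keywords)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(keywords, table[i].keywords) == 0)
            return table[i].phrase;
    return NULL;
}

int main(void)
{
    const char *phrase = expand_keywords("want cup tea");
    printf("%s\n", phrase ? phrase : "(no matching phrase)");
    return 0;
}

          Because the recognizer only has to distinguish the small set of keywords that appear in such a table, rather than unconstrained speech, the recognition perplexity stays low.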

        3. Speech Synthesis

          After speech recognition, speech synthesis is the next computationally demanding process. The Personal Digital Assistant (PDA) takes the voice input from the user through a microphone, but its internal speaker produces speech only at a low volume, which is not adequate in practice; a separate amplifier and speaker are therefore needed for the spoken output. The PDA's CPU also does not support rapid numerical computation, and in particular it has no hardware support for floating-point operations.
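          The paper does not state how the synthesizer works around this limit, but a common technique on processors without a floating-point unit is fixed-point (for example Q15) arithmetic, sketched generically below; this is an illustration of the technique, not code from the system.

/* Generic Q15 fixed-point multiply: values in [-1, 1) are stored as
 * 16-bit integers scaled by 2^15, so no floating-point hardware is needed. */
#include <stdint.h>
#include <stdio.h>

typedef int16_t q15_t;

static q15_t q15_mul(q15_t a, q15_t b)
{
    return (q15_t)(((int32_t)a * (int32_t)b) >> 15);
}

int main(void)
{
    q15_t half    = 1 << 14;             /* 0.5 in Q15 */
    q15_t quarter = q15_mul(half, half); /* 0.25 in Q15 = 8192 */
    printf("%d\n", quarter);
    return 0;
}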

          The proposed system is a voice-input voice-output communication aid controlled by an automatic speech recognition system. Simple or complex messages can be produced with the device, and the same system can also generate a small set of individual words. The main advantage of the proposed system is that it produces an intelligible speech output, and it is more suitable for people with severe speech impairment than other systems. The message building block constructs the messages, based on the needs of the user, from the already trained words in the vocabulary. The main function of an ideal message building module is to recognize each word unit individually and output the correct word in a clear voice. As the size of the vocabulary increases, the recognition accuracy decreases [9].

        4. Controlling Home Appliances

        Another important function of the proposed system is controlling home appliances by voice, which is very useful for physically challenged people in their daily life, without depending on other individuals. The system includes a module for controlling the home appliances; its block diagram is shown in Fig. 3, and a sketch of the control logic is given below.
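        The sketch below gives a minimal picture of that control logic in C. The port name, relay bit assignments, and word codes are assumptions made purely for illustration; the real mapping depends on the microcontroller, the relay driver, and the wiring used.

/* Illustrative relay control driven by a recognized command code.
 * RELAY_PORT, the bit masks, and the command codes are hypothetical. */
#include <stdint.h>

#define FAN_RELAY    (1u << 0)   /* assumed relay driver bit for the fan */
#define LIGHT_RELAY  (1u << 1)   /* assumed relay driver bit for the light */

static uint8_t RELAY_PORT;       /* stands in for a memory-mapped output port */

/* Map a recognized word code from the speech recognizer to a relay action. */
static void handle_command(uint8_t word_code)
{
    switch (word_code) {
    case 1: RELAY_PORT |=  FAN_RELAY;   break;  /* "fan on"    */
    case 2: RELAY_PORT &= ~FAN_RELAY;   break;  /* "fan off"   */
    case 3: RELAY_PORT |=  LIGHT_RELAY; break;  /* "light on"  */
    case 4: RELAY_PORT &= ~LIGHT_RELAY; break;  /* "light off" */
    default: break;                             /* unrecognized: do nothing */
    }
}

int main(void)
{
    handle_command(1);   /* e.g. the recognizer reported word code 1 */
    return 0;
}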

      5. HARDWARE IMPLEMENTATION

        This section gives the details of the various hardware modules used in the implementation of the system.

        1. HM2007

          The main module used for voice recognition is the HM2007 voice recognition IC. The speech recognition circuit is easily programmable and completely assembled. Using this circuit, the system can be trained with the vocal utterances of the user for the speech recognition process. It supports many applications, such as controlling home appliances, robotic movement, speech-to-text translation, speech-assisted technologies, and more.

          Fig. 3. Block diagram of the system for controlling home appliances

          The main module of the proposed system is the HM2007 speech recognition IC, which is the heart of the system. About 40 words of 1.92 second word length can be recognized. The HM2007 speech recognition kit consists of a microphone for providing the voice inputs and a keypad for entering the number corresponding to each word; the keypad contains 12 contact switches. The system also contains an LCD display for displaying the words.
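          A minimal sketch of how a microcontroller might read the recognition result is shown below. RESULT_PORT and RESULT_READY are hypothetical names standing in for the latched HM2007 data bus and a "result available" strobe; the actual registers depend on the interface circuit used.

/* Illustrative read of the recognizer result on a microcontroller.
 * RESULT_PORT and RESULT_READY are hypothetical stand-ins for the latched
 * HM2007 data bus and a "new result" strobe line. */
#include <stdint.h>

static volatile uint8_t RESULT_PORT;    /* latched result byte (word number) */
static volatile uint8_t RESULT_READY;   /* nonzero when a new result arrives */

/* Block until the recognizer reports a result, then return its word code. */
static uint8_t wait_for_word(void)
{
    while (!RESULT_READY)
        ;                               /* poll for a recognition result */
    RESULT_READY = 0;                   /* clear the flag for the next word */
    return RESULT_PORT;
}

int main(void)
{
    /* Simulate one recognition event so the sketch also runs on a desktop. */
    RESULT_PORT = 5;
    RESULT_READY = 1;

    uint8_t word = wait_for_word();
    (void)word;                         /* would go to the message builder */
    return 0;
}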

          Fig. 4. HM2007 kit

          The proposed system includes other hardware modules such as the APR9600 voice IC, in which the intelligible voice output corresponding to each input word is stored for communication. The main advantage of this module is its ability to reproduce the voice in natural form. For controlling the home appliances, a relay module is used, which acts as a set of automatically controlled switches.
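          The fragment below sketches how a stored APR9600 message might be triggered from the controller. The port name, bit assignment, and active-low pulse are assumptions based on typical APR9600 application circuits, not details taken from this paper.

/* Illustrative trigger of a pre-recorded APR9600 message. The port name,
 * bit assignment, and active-low behaviour are assumptions drawn from
 * typical APR9600 application circuits. */
#include <stdint.h>

static volatile uint8_t APR_PORT = 0xFF;   /* stands in for the output port
                                              wired to the message inputs */

static void delay_ms(unsigned ms) { while (ms--) { /* board-specific delay */ } }

/* Pulse the selected message input low to start playback of that message. */
static void play_message(uint8_t msg_index)           /* 0..7 */
{
    APR_PORT &= (uint8_t)~(1u << msg_index);           /* assert (active low) */
    delay_ms(50);                                       /* short trigger pulse */
    APR_PORT |= (uint8_t)(1u << msg_index);             /* release             */
}

int main(void)
{
    play_message(2);   /* e.g. play the phrase stored in the third message slot */
    return 0;
}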

          Fig. 5. Simulation result of the system

      6. SIMULATION AND EXPERIMENTAL RESULTS

Simulation of the system was done in Proteus version 8.1. Proteus is a software tool for microprocessor simulation, schematic capture, and printed circuit board design. The firmware used in the simulation is written in embedded C and built in MPLAB. Figure 5 shows the simulation results of the system.

7. CONCLUSION

About 1.3% of the general population use a communication aid because of speech impairment resulting from conditions such as motor neuron disease, stroke, cerebral palsy, and traumatic brain injury. Many communication aids are currently available, but such devices have disadvantages: they depend on switches or keyboards for input, they are relatively slow, and they disrupt eye contact between communication partners. This work therefore developed an advanced communication aid that is portable and easy to operate. Using the same system, users with physical disabilities can also control home appliances by voice.

REFERENCES

  1. D. Beukelman and P. Mirenda, Augmentative and Alternative Communication, 3rd ed. Baltimore, MD: Paul H. Brookes, 2005.

  2. P. Enderby and L. Emerson, Does Speech and Language Therapy Work? London, U.K.: Singular, 1995.

  3. J. Murphy, "'I prefer contact this close': Perceptions of AAC by people with motor neurone disease and their communication partners," Augmentative Alternative Commun., vol. 20, pp. 259–271, 2004.

  4. C. L. Kleinke, "Gaze and eye contact: A research review," Psychol. Bull., vol. 100, no. 1, pp. 78–100, 1986.

  5. B. O'Keefe, N. Kozak, and R. Schuller, "Research priorities in augmentative and alternative communication as identified by people who use AAC and their facilitators," Augmentative Alternative Commun., vol. 23, no. 1, pp. 89–96, 2007.

  6. J. Todman, N. Alm, J. Higginbotham, and P. File, "Whole utterance approaches in AAC," Augmentative Alternative Commun., vol. 24, no. 3, pp. 235–254, 2008.

  7. L. J. Ferrier, H. C. Shane, H. F. Ballard, T. Carpenter, and A. Benoit, "Dysarthric speakers' intelligibility and speech characteristics in relation to computer speech recognition," Augmentative Alternative Commun., vol. 11, no. 3, pp. 165–175, 1995.

  8. N. Thomas-Stonell, A. L. Kotler, H. A. Leeper, and P. C. Doyle, "Computerized speech recognition: Influence of intelligibility on recognition accuracy," Augmentative Alternative Commun., vol. 14, no. 1, pp. 51–56, 1998.

  9. R. N. Bloor, K. Barrett, and C. Geldard, "The clinical application of microcomputers in the treatment of patients with severe speech dysfunction," IEE Colloquium on High-Tech Help for the Handicapped, pp. 9/1–9/2.
