Automatic Writing Machine using Brain Sensor

DOI : 10.17577/IJERTCONV4IS19016


N. Harini, S. Catherina Dolly, R. Parvadhavardhini

Students, Department of Electronics and Communication Engineering, Anand Institute of Higher Technology,

Anna University, Chennai, India

Abstract:- A new concept of pen is presented here: an automatic pen writer driven by a brain sensor. Automatic writing is an ability that allows paralysed persons to produce written words without consciously writing; the term traditionally refers to writing claimed to arise from a subconscious, spiritual, or supernatural source, but here the words arise from sensed brain signals. The automatic writing machine is a mechanical hand used to write characters as well as words, and the pen writes whatever paralysed people wish to reveal to others, with the help of brain sensors. The pen writer concept combines all the traditional elements, an automatic writing machine, pen, hard disk, voice sensor, brain sensor, battery, and so on, in an innovative manner. The automatic pen writer with sensor is designed for three main purposes: first, to sense what comes to the user's mind using brain sensors; second, to convert the sensed signals to voice; and third, to convert the voice to text using the automatic writing pen.

Keywords:- Brain sensor, voice sensor, plates, auto writing machine.

I. INTRODUCTION

Some physically challenged people are able to think but unable to write. To overcome this difficulty, the auto writing machine is designed to sense their thinking using a brain sensor and convert it to a voice signal using a transducer. This voice signal is given as input to the auto writing machine, which can accept the voice and process it. GAKKEN, a Japanese company founded in 1946, developed a large mechanical hand: the GAKKEN auto writing machine consists of a hand that writes characters when a pen is stuck into its holder. This research aims to use an auto pen for writing in the easiest way. The auto writer works by having a hard disk for storing a large amount of data and three plates that rotate and are caught by two sliders, which then pull the spring-loaded hand to draw the desired shape. The main advantage of this research is that it senses the brain signals of the user, converts them to voice, which is then easily turned to text and written on paper; another main advantage is that it stores the data in plates or on a hard disk.

Fig. 1.1. Automatic Pen Writer Machine [3]

Fig. 1.2. Automatic Advanced Pen Writer [3]

II. EXISTING SYSTEM

This section explains the existing system, which converts voice to text. The most common approaches to voice recognition can be divided into two classes:

• Template matching

• Feature matching

Template matching is the simplest technique and has the highest accuracy when used properly. As with any approach to voice recognition, the first step is for the user to speak a word or phrase into a microphone. The electrical signal coming from the microphone is digitized by an analog-to-digital (A/D) converter and stored in memory. To determine the meaning of the voice input, the computer attempts to match the input with a stored document that has a known meaning. This technique is a close analogy to traditional command input from a keyboard: the program contains the input template and attempts to match it with the actual input using a simple conditional statement.
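
As a concrete illustration, the following Python sketch matches a digitized input signal against stored templates. It is a minimal example under assumed details, a Euclidean distance measure and truncation to a common length, rather than the paper's implementation.

    import numpy as np

    def match_template(input_signal, templates):
        """Return the label of the stored template closest to the input."""
        best_label, best_dist = None, float("inf")
        for label, template in templates.items():
            n = min(len(input_signal), len(template))  # compare a common length
            dist = np.linalg.norm(input_signal[:n] - template[:n])
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

    # Usage: "templates" would be filled during a training session.
    templates = {"write": np.random.randn(8000), "stop": np.random.randn(8000)}
    print(match_template(np.random.randn(8000), templates))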

Speaker-Dependent System

The system has an inbuilt program that displays a printed word or phrase, which the user speaks several times into a microphone during a training session. The program then computes the statistical average of the multiple samples of the same word and stores the averaged sample as a template in a program data structure. With this approach to voice recognition, the program's vocabulary is limited to the words or phrases used in the training session, and its user base is limited to the users who have trained the program; hence this is called a speaker-dependent system. It can have vocabularies on the order of a few hundred words and short phrases, and recognition accuracy can be about 98 percent. [3]
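
The training step described above can be sketched as follows; the function name and the alignment to the shortest sample are assumptions for illustration, not details from the paper.

    import numpy as np

    def train_template(samples):
        """Average several digitized samples of the same word into one template."""
        n = min(len(s) for s in samples)  # align all samples to the shortest one
        return np.stack([s[:n] for s in samples]).mean(axis=0)

    # Three recordings of the word "hello" become a single stored template.
    recordings = [np.random.randn(8000) for _ in range(3)]
    templates = {"hello": train_template(recordings)}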

Fig. 2. Voice Sensor [2]

III. PROPOSED SYSTEM

The brain constantly generates electrical signals, whether you are thinking, sleeping, or even relaxing. These signals can be detected from outside the head via sensor electrodes. Broadband gamma-wave signals are recorded for every electrode using ECoG, and stacked broadband gamma features are calculated by signal processing. The processed signals are decoded into text using the Viterbi algorithm, and the decoded text is converted into voice using a text-to-speech method. This voice signal is sent as input to the auto writing machine. The given output samples are checked for existence on the hard disk or plates; if they do not exist, a new document is created and the auto writing machine starts writing on paper. A minimal sketch of this data flow is given after this paragraph.
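
The following Python sketch makes that data flow explicit. Every function in it is a hypothetical stand-in, random data instead of a real ECoG recording, a rectified signal instead of real gamma-band filtering, and a fixed string instead of real decoding; only the ordering of the steps comes from this paper.

    import numpy as np

    def record_ecog(seconds=1.0, rate=1000, channels=64):
        """Stand-in for an ECoG recording, shaped (channels, samples)."""
        return np.random.randn(channels, int(seconds * rate))

    def extract_gamma_features(signal):
        """Stand-in for stacked broadband gamma features per electrode."""
        return np.abs(signal)  # a real system would band-pass filter first

    def decode_to_text(features):
        """Stand-in for Viterbi decoding of ECoG features into text."""
        return "hello world"

    def text_to_speech(text):
        """Stand-in for text-to-speech synthesis."""
        return text.encode()

    voice = text_to_speech(decode_to_text(extract_gamma_features(record_ecog())))
    # "voice" is then sent as input to the auto writing machine.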

Fig. 3. Electrocorticography [3]

IV. HARDWARE REQUIREMENTS

    Electrocorticography

Electrocorticography is a type of electrophysiological monitoring that uses electrodes placed directly on the exposed surface of the brain to record electrical activity.

    Processor

The processor is the logic circuitry that responds to and processes the basic instructions that drive a computer. The term processor is generally used interchangeably with central processing unit (CPU).

    Decoder

A decoder is a circuit that changes a code into a set of signals. It is called a decoder because it does the reverse of encoding. A common type is the line decoder, which takes an n-digit binary number and decodes it into output signals. In this system, decoding is performed with the Viterbi algorithm, sketched below.
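
As a concrete illustration of Viterbi decoding, the sketch below finds the most likely state path through a toy discrete hidden Markov model. The two states, transition table, and emission table are invented for illustration; the actual system would use Gaussian ECoG phone models, a dictionary, and an n-gram language model, as described in Section V.

    import numpy as np

    def viterbi(obs, start_p, trans_p, emit_p):
        """Return the most likely state sequence for the observations."""
        n_states, T = len(start_p), len(obs)
        prob = np.zeros((T, n_states))             # best log-probability so far
        back = np.zeros((T, n_states), dtype=int)  # backpointers
        prob[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
        for t in range(1, T):
            for s in range(n_states):
                scores = prob[t - 1] + np.log(trans_p[:, s])
                back[t, s] = np.argmax(scores)
                prob[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
        # Trace the best path backwards from the most likely final state.
        path = [int(np.argmax(prob[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Toy example: two phone states, three observation symbols.
    start = np.array([0.6, 0.4])
    trans = np.array([[0.7, 0.3], [0.4, 0.6]])
    emit = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    print(viterbi([0, 1, 2], start, trans, emit))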

    Auto writing machine

The auto writing machine consists of a mechanical hand, plates or a hard disk, and a voice sensor. The voice sensor accepts the voice signal, and the machine thereby produces the required written output.

Fig. 4. Components of Auto Writing Machine [4]

V. BLOCK DIAGRAM

Brain Sensor

Brain activities are recorded by electrocorticography electrodes (blue circles). Spoken words are then decoded from neural activity patterns in the blue/yellow areas. The Brain-to-Text system recorded signals from an electrocorticographic (ECoG) electrode array located on relevant surfaces of the frontal and temporal lobes of the cerebral cortex of seven epileptic patients, who participated voluntarily in the study during their clinical treatment. The Brain-to-Text system might lead to a speech-communication method for locked-in patients in the future.

Overview of Brain-to-Text System

Phone (phoneme sound) likelihoods over time are calculated by evaluating all Gaussian ECoG phone models for every segment of ECoG features. Using the ECoG phone models, a dictionary, and an n-gram language model, phrases are decoded with the Viterbi algorithm. The most likely word sequence and corresponding phone sequence are calculated, and the phone likelihoods over time can be displayed; red-marked areas in the phone likelihoods show the most likely phone path.
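
The evaluation of the Gaussian phone models can be sketched as follows, assuming unit-variance Gaussians; the phone set, feature dimension, and model means are invented for illustration and would be fitted to training data in a real system.

    import numpy as np

    PHONES = ["AH", "B", "K"]  # tiny illustrative phone set
    DIM = 8                    # feature dimension (assumed)
    rng = np.random.default_rng(0)
    means = {p: rng.normal(size=DIM) for p in PHONES}  # unit-variance models

    def log_likelihood(segment, mean):
        """Log-density of a unit-variance diagonal Gaussian at `segment`."""
        diff = segment - mean
        return -0.5 * (DIM * np.log(2 * np.pi) + diff @ diff)

    def phone_likelihoods(segment):
        """Evaluate every Gaussian phone model on one ECoG feature segment."""
        return {p: log_likelihood(segment, m) for p, m in means.items()}

    scores = phone_likelihoods(rng.normal(size=DIM))
    print(max(scores, key=scores.get))  # most likely phone for this segment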

The signal processing and automatic speech recognition methods were developed at the Cognitive Systems Lab of Karlsruhe Institute of Technology (KIT) in Germany. ECoG technology uses a dense array of electrodes to record signals directly from the cortical surface at high spatial resolution, high temporal resolution, and high signal-to-noise ratio.

Brain-to-Text

It has long been speculated whether communication between humans and machines based on natural-speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones, or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-to-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that the system can achieve word error rates as low as 25% and phone error rates below 50%.

VI. IMPLEMENTATION

The implementation of the automatic pen writer is discussed below:

• The hard disk or plates store the required documents.

• The speaker-dependent system is used for recognising the user's voice.

• The sensors recognise the user, match the user's input against the stored documents, return the result, and start writing on paper.

• The speaker-dependent system is more efficient than a speaker-independent one: speaker-independent speech recognition has proven very difficult, because pattern matching fails to handle accents and variations in speed of delivery, pitch, volume, and inflection.

• If the user wants a fresh document which does not exist on the hard disk or plates, the automatic pen allows this by sensing the user's brain signals and then writing.

• It stores the new document on the hard disk for later use.

In this way, the automatic pen writer with sensors automates the writing process for physically challenged people who are unable to write, giving them a new and better way of communicating. A minimal sketch of the document lookup-and-store step follows.
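
The lookup-and-store behaviour in the list above can be sketched as follows; the storage directory and file-naming scheme are assumptions for illustration, not details from the paper.

    from pathlib import Path

    STORE = Path("documents")  # stands in for the hard disk or plates

    def fetch_or_create(name, decoded_text):
        """Return stored text if the document exists, else create and store it."""
        STORE.mkdir(exist_ok=True)
        doc = STORE / f"{name}.txt"
        if doc.exists():
            return doc.read_text()    # existing document: reuse it
        doc.write_text(decoded_text)  # new document: store for later use
        return decoded_text

    text = fetch_or_create("note1", "decoded thought goes here")
    # "text" is then passed to the writing machine to be written on paper.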

      ADVANTAGES OF AUTOMATIC PEN WRITER

• Helps paralysed and physically challenged people to write easily

• Stores a large amount of data

• It is portable

• Useful for writing documents

• It uses brain-signal recognition to fetch the signals required for writing

DISADVANTAGES OF AUTOMATIC PEN WRITER

• It stores only documents, and no other data types such as images

• It needs a battery

CONCLUSION

The system aims to create a machine that writes our thoughts. We look forward to doing as much as we can to satisfy the needs of physically challenged people who struggle with the inability to speak or communicate.

REFERENCES

  1. http://www.japantrendshop.com/

2. http://hitl.washington.edu/research/knowlegebase/virtual-words/EVE/I.D.2.d.Voicerecognition.html

  3. http://www.adafruit.com/products/2032

4. https://threestepsoverjapan.wordpress.com/2014/07/28/kit-41-the-auto-writer

5. E. F. Chang, J. W. Rieger, K. Johnson, M. S. Berger, N. M. Barbaro, and R. T. Knight, "Categorical speech representation in human superior temporal gyrus," Nature Neuroscience, vol. 13, no. 11, pp. 1428-1432, 2010.

6. N. Mesgarani, C. Cheung, K. Johnson, and E. F. Chang, "Phonetic feature encoding in human superior temporal gyrus," Science, p. 1245994, 2014.

7. N. Crone, L. Hao, J. Hart, D. Boatman, R. Lesser, R. Irizarry, and B. Gordon, "Electrocorticographic gamma activity during word production in spoken and sign language," Neurology, vol. 57, no. 11, pp. 2045-2053, 2001.

8. N. E. Crone, D. Boatman, B. Gordon, and L. Hao, "Induced electrocorticographic gamma activity during auditory perception," Clinical Neurophysiology, vol. 112, no. 4, pp. 565-582, 2001.

9. E. C. Leuthardt, C. Gaona, M. Sharma, N. Szrama, J. Roland, Z. Freudenberg, J. Solis, J. Breshears, and G. Schalk, "Using the electrocorticographic speech network to control a brain-computer interface in humans," Journal of Neural Engineering, vol. 8, no. 3, p. 036004, 2011.

10. F. H. Guenther, J. S. Brumberg, E. J. Wright, A. Nieto-Castanon, J. A. Tourville, M. Panko, R. Law, S. A. Siebert, J. L. Bartels, D. S. Andreasen et al., "A wireless brain-machine interface for real-time speech synthesis," PLoS ONE, vol. 4, no. 12, p. e8218, 2009.

11. E. Formisano, F. De Martino, M. Bonte, and R. Goebel, "'Who' is saying 'what'? Brain-based decoding of human voice and speech," Science, vol. 322, no. 5903, pp. 970-973, 2008.

12. T. Blakely, K. J. Miller, R. P. Rao, M. D. Holmes, and J. G. Ojemann, "Localization and classification of phonemes using high spatial resolution electrocorticography (ECoG) grids," in Engineering in Medicine and Biology Society, 2008. EMBS 2008. 30th Annual International Conference of the IEEE. IEEE, 2008, pp. 4964-4967.

13. X. Pei, D. L. Barbour, E. C. Leuthardt, and G. Schalk, "Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans," Journal of Neural Engineering, vol. 8, no. 4, p. 046028, 2011.

14. S. Kellis, K. Miller, K. Thomson, R. Brown, P. House, and B. Greger, "Decoding spoken words using local field potentials recorded from the cortical surface," Journal of Neural Engineering, vol. 7, no. 5, p. 056007, 2010.

15. J. S. Brumberg, E. J. Wright, D. S. Andreasen, F. H. Guenther, and P. R. Kennedy, "Classification of intended phoneme production from chronic intracortical microelectrode recordings in speech motor cortex," Frontiers in Neuroscience, vol. 5, 2011.
