Voice Assistant for Coronavirus Disease (COVID-19)

DOI: 10.17577/IJERTCONV10IS04061


Dr. T. Arumuga Maria Devi

Associate Professor 1

Centre for Information Technology & Engineering, Manonmaniam Sundaranar University, Tirunelveli, India

R. Arun Bothagar

PG Scholar 2

Centre for Information Technology & Engineering, Manonmaniam Sundaranar University, Tirunelveli, India

M. Divya Magesh

PG Scholar 3

Centre for Information Technology & Engineering, Manonmaniam Sundaranar University, Tirunelveli, India

Abstract: The coronavirus has caused havoc across the globe, and the pandemic is still on the rise even though countries around the world have imposed lockdowns. Robotics is a promising area that can help reduce the spread of the coronavirus, so we have developed a voice assistant to assist people during the pandemic. A voice assistant is a program that can understand human speech and act accordingly. In our scenario, we developed a voice assistant for coronavirus-affected people with the help of machine learning. The concept of decision trees was used, which gave appropriate results when specific parameters were encountered. The user gives his or her symptoms as input with the help of Python's speech-to-text library. The proposed COVID voice assistant reports whether a patient is affected with the coronavirus or not. This is useful for everyone, especially for disabled people, because it works with voice.

  1. INTRODUCTION

    In late 2019, the novel coronavirus (COVID-19) was initially detected in Wuhan, China. The victims there had pneumonia [1]. The virus was identified as belonging to the genus beta-coronavirus, placing it in the same family as other previously identified deadly viruses such as Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS). Currently, the virus has spread to more than 200 nations. Early in 2020, the WHO designated this virus a worldwide health emergency. By July 5, 2020, this epidemic had claimed the lives of more than 540,000 individuals worldwide, including more than 130,000 Americans [2].

    The number of COVID-19 cases in hospitals and clinics around the world has surpassed all expectations. People are going to clinics and hospitals for unrelated ailments due to widespread panic and rumors. These visits are straining the healthcare system and driving up expenditures while also spreading disease. As a result, researchers are looking into self-evaluation as a potential remedy. Self-assessment is a method that has been employed in the healthcare industry for a long time since it promotes learning, improved performance, and the development of self-agency and authority [3, 4].

    Although applications have recently been developed for COVID-19 self-assessment, and these programs are useful, not everyone may have access to them. Additionally, such programs are of little use to someone who cannot read, cannot use a computer, or is blind. As a result, we suggest a novel solution in this paper: using a VA for COVID-19 self-assessment. Based on the user's medical condition, this interactive tool aids in more accurate clinical decision-making on whether to seek medical attention or rest at home, without placing undue pressure on hospitals during these difficult times.

  2. RELATED WORK

    A wearable healthcare assistant was created to record environmental and physiological data before the COVID-19 pandemic. This LifeMinder prototype was utilized to detect continuous speech, user actions and postures, as well as pulse waves. The collected data were transferred to a medical PC and made accessible to users on a web page [4]. HealthPal is a personal medical assistant with intelligent speech that some researchers have developed for use in health self-monitoring [5]. This software was created to aid senior folks in independently monitoring their health. A robotic helper that can interact with patients, take vital signs, and record data was also built by researchers. The robot has a 3D face that can express various emotions and is connected to a blood pressure sensor.

    The majority of healthcare assistants in the early 2000s concentrated on wearable technology and computer-based software programs. In light of recent advancements in artificial intelligence, researchers have begun to investigate the application of voice technologies capable of decision-making as healthcare assistants. To address the issues that patients with wearable health sensors experience, some researchers created patient-focused voice and online services using Amazon Alexa and Google Assistant [6]. The created assistant could also offer advice, make doctor appointments, and remind the patient of upcoming appointments and therapy.

  3. PROPOSED ARCHITECTURE

    An overview of the suggested VA architecture is shown in Fig. 1. The architecture is made up of the user interface, communication, and analytical layers. The user interface layer comprises the hardware tools and elements that interact with people, such as smartphones, smart speakers (e.g., Echo), computers, tablets, and smart TVs, where input can be given orally or through text. The communication layer contains the networks and protocols used to connect the physical devices of the user interface layer to the cloud platform in the analytical layer, including the internet, Wi-Fi, Bluetooth, broadband, and cellular networks.

    The two blocks that make up the analytical layer are the decision logic block and the Natural Language Processing (NLP) block.

    The user begins the process by making a spoken request to a hardware device at the user interface layer. The request then moves through the communication layer, where a language processor in the analytical layer receives the voice-based input. Speech recognition software in the language processor converts the spoken words to text. This procedure, known as speech-to-text (STT) conversion, produces text output that helps the computer understand spoken commands. Similarly, the semantics processor and context generator blocks aid in additional text processing to decipher and comprehend user commands. The command is then passed, together with the established decision logic, to the backend block. The backend also exchanges stored data with the database repository over the communication layer. Following back-end processing, the response is routed back to the language processor block for text-to-speech (TTS) conversion before being sent back to the user as speech.

    Fig. 1. Proposed generalized architecture of the VA
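    The end-to-end loop described above (STT, then decision logic, then TTS) can be illustrated with a minimal Python sketch. This is not the authors' implementation: it assumes the third-party speech_recognition and pyttsx3 packages for the STT and TTS blocks, and decide() is a hypothetical stand-in for the decision-logic block.

# Minimal sketch of the STT -> decision logic -> TTS loop described above.
# Assumes the third-party speech_recognition and pyttsx3 packages; decide()
# is a hypothetical stand-in for the decision-logic block.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts_engine = pyttsx3.init()

def listen() -> str:
    """Capture one spoken utterance and convert it to text (STT)."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # cloud STT; needs internet access

def speak(text: str) -> None:
    """Convert a text response back to speech (TTS)."""
    tts_engine.say(text)
    tts_engine.runAndWait()

def decide(command: str) -> str:
    """Hypothetical decision-logic block: map a command to a response."""
    if "test" in command.lower():
        return "Let us begin the COVID-19 self-assessment."
    return "Sorry, please repeat the instruction."

if __name__ == "__main__":
    speak(decide(listen()))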

  4. METHODOLOGY

    The major goal of our investigation was to evaluate how well the suggested VA self-assessment performed during COVID-19. The VA communicates with participants during the experiment and, after learning about their health state, directs them through the procedure. The VA has been developed in such a way that it even provides a choice of answers for each question it poses, in order to improve the interaction's effectiveness and user-friendliness. In the course of the experiment, several characteristics were measured, including participant and VA errors, the quantity of interactions between participants and the VA, the impact of the VA on participant performance, and the overall testing duration.

    1. COVID-19 Self-Assessment Protocol

      In this section, we discuss the different cases considered in our application and the subsequent recommendations made to the users based on their input to the VA. As per the CDC and WHO recommendations, the structure of the guidelines that we followed in our application is depicted in Fig. 2 [14].

      To provide better recommendations to the user, we divided the different conditions identified to date into two categories: infected and not-infected. Users facing any of the symptoms falling under the infected category were recommended to call 108 and visit the emergency room immediately. Users were advised to stay home, contact medical professionals by phone or through online tools, and use over-the-counter medications if they experienced symptoms similar to those described in the not-infected category. Additionally, if the users had lately been in contact with anybody infected with COVID-19, had recently attended a large gathering of individuals, or had visited an area significantly affected by COVID-19, and had then developed a fever, cold, and cough, they were advised to remain in quarantine and contact medical personnel by phone or online for any recommendations. The user was deemed safe and told to keep their distance from others if they did not fit into either of the aforementioned categories.
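      These recommendation rules can be sketched compactly in Python. The symptom names and the assignment of symptoms to categories below are illustrative assumptions; the paper's exact rule structure is given in Fig. 2.

# Sketch of the recommendation rules above. Symptom names and category
# membership are illustrative assumptions, not the paper's exact rules.
def recommend(symptoms: set) -> str:
    INFECTED = {"breathing problem", "high fever"}        # assumed emergency signs
    NOT_INFECTED = {"fever", "cold", "cough"}             # assumed milder signs
    EXPOSURE = {"contact with infected", "large gathering", "visited hotspot"}

    if symptoms & INFECTED:
        return "Call 108 and visit the emergency room immediately."
    if symptoms & EXPOSURE and symptoms & NOT_INFECTED:
        return "Remain in quarantine and contact medical personnel by phone or online."
    if symptoms & NOT_INFECTED:
        return ("Stay home, contact medical professionals by phone or online, "
                "and use over-the-counter medications.")
    return "You are deemed safe; keep your distance from others."

print(recommend({"cough", "contact with infected"}))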

    2. Experimental Setup

      The experiment had five execution steps under different settings, although each participant underwent a different number of steps overall depending on the circumstances. Depending on how the user responds to the questions posed by the VA, testing may last anywhere between 25 and 35 seconds. The procedure starts with questions concerning infected cases and instantly stops and offers a matching recommendation if any of those cases is observed. A participant moves to the not-infected zone and checks for various conditions if they do not experience any of the cases from the infected zone. Finally, if nothing is observed, the VA declares the participant safe and enters the safe (green) zone. Participants in the testing phase were asked the following questions (a sketch of this flow follows the dialogue):

      VA: I am the COVID voice assistant. Do you want to take a coronavirus test?

      User: Of course.

      VA: Do you have a fever?

      User: YES/NO

      VA: Do you have a cold?

      User: YES/NO

      VA: Do you have a cough?

      User: YES/NO

      VA: Do you have a breathing problem?

      User: YES/NO

      VA: Have you been in contact with coronavirus-infected people?

      User: YES/NO

      VA: If YES, visit room no. 1. If NO, there is no need to visit room no. 1.
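      The dialogue above can be expressed as a simple question loop. This is a much-simplified sketch: ask() is assumed to wrap the assistant's STT/TTS helpers, and here it is simulated with typed input instead of the microphone.

# Sketch of the dialogue above as a question loop. ask() is assumed to wrap
# the assistant's STT/TTS helpers; here it is simulated with typed input.
QUESTIONS = [
    "Do you have a fever?",
    "Do you have a cold?",
    "Do you have a cough?",
    "Do you have a breathing problem?",
    "Have you been in contact with coronavirus-infected people?",
]

def run_assessment(ask) -> str:
    # Collect a YES/NO answer for each question in turn.
    answers = {q: ask(q).strip().upper() == "YES" for q in QUESTIONS}
    # The dialogue branches on the final contact question.
    if answers[QUESTIONS[-1]]:
        return "Please visit room no. 1."
    return "There is no need to visit room no. 1."

if __name__ == "__main__":
    print(run_assessment(lambda q: input(q + " (YES/NO): ")))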

    3. Data Collection

    The symptoms of the coronavirus were collected from various hospitals and websites, and the machine was fed with the corresponding inputs and outputs. The user then gives his or her symptoms as input, and the machine works according to the data it was fed, as sketched below.
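    The training step can be sketched as follows, assuming a scikit-learn decision tree classifier (matching the "concept of decision trees" in the abstract). The toy dataset here is illustrative, not the data actually collected from hospitals and websites.

# Sketch of the training step described above: collected symptom data is fed
# to a decision tree. The toy dataset is illustrative, not the paper's data.
from sklearn.tree import DecisionTreeClassifier

# Columns: fever, cold, cough, breathing_problem, contact_with_infected
X = [
    [1, 1, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
y = ["infected", "not-infected", "not-infected", "safe"]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[1, 0, 1, 1, 0]]))  # classify a new set of symptoms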

  5. RESULTS

The major goal of the paper is to make the machine user-friendly. The machine can be used by everyone, educated or uneducated, and even the blind can use it, because it is operated by voice.

Using Tkinter, we have created an app for better visualization. The app has a login page with a username and password and then takes the user to the main page, as shown below.

The user is provided with a login ID:

Username: Admin

Password: admin
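Below is a minimal sketch of such a login page, assuming a Tkinter GUI; the widget layout and handler names are hypothetical, and only the Admin/admin credentials come from the text.

# Minimal sketch of the login page described above, assuming a Tkinter GUI.
# Widget layout is hypothetical; only the credentials come from the paper.
import tkinter as tk
from tkinter import messagebox

def check_login():
    if username.get() == "Admin" and password.get() == "admin":
        messagebox.showinfo("Login", "Correct user ID: opening main page.")
    else:
        messagebox.showerror("Login", "Wrong user ID or password.")

root = tk.Tk()
root.title("User Login")
tk.Label(root, text="Username").grid(row=0, column=0)
username = tk.Entry(root)
username.grid(row=0, column=1)
tk.Label(root, text="Password").grid(row=1, column=0)
password = tk.Entry(root, show="*")
password.grid(row=1, column=1)
tk.Button(root, text="Login", command=check_login).grid(row=2, column=1)
root.mainloop()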

Fig. 3. User login

Fig. 4. Correct user ID

Fig. 5. Wrong user ID

Fig. 6. Visualization main page

  6. EXPERIMENTAL RESULTS

Figs. 7 & 8. Infected people

Figs. 9 & 10. Not-infected people

  7. CONCLUSION

The voice assistant functions in accordance with the concept and algorithm it was created with. The system starts with a power switch button and welcomes the user at startup based on the time of day. The system responds to commands by looking for specific keywords, also known as wake-up words, and then executing the corresponding Python code block; a sketch of this dispatch follows. The voice assistant reacts promptly and with sufficient precision. For the system to respond to the user, there must be reliable internet access. The voice assistant apologizes to the user and requests a repeat of the instruction if it is unable to answer or locate a proper match from the keywords in the voice command.
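The keyword dispatch described above can be sketched as follows. The wake-up words and handlers here are hypothetical illustrations; the actual keyword set is not given in the paper.

# Sketch of the keyword ("wake-up word") dispatch described in the
# conclusion: the recognized command is scanned for known keywords and the
# matching handler runs; otherwise the assistant apologizes and asks the
# user to repeat. Keywords and handlers are illustrative assumptions.
HANDLERS = {
    "test": lambda: "Starting the COVID-19 self-assessment.",
    "hello": lambda: "Hello! I am the COVID voice assistant.",
}

def dispatch(command: str) -> str:
    for keyword, handler in HANDLERS.items():
        if keyword in command.lower():
            return handler()
    return "Sorry, I could not understand. Please repeat the instruction."

print(dispatch("I want to take a coronavirus test"))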

REFERENCES

[1] J. Bryner, "1st known case of coronavirus traced back to November in China," 2020, accessed 29 May 2020. [Online]. Available: https://www.livescience.com/first-case-coronavirusfound.html

[2] N. Myers-Wright, B. Cheng, S. N. Tafreshi, and I. B. Lamster, "A simple self-report health assessment questionnaire to identify oral diseases," International Dental Journal, vol. 68, no. 6, pp. 428–432, 2018.

[3] S. Nishiguchi, H. Ito, M. Yamada, H. Yoshitomi, M. Furu, T. Aoyama, T. Tsuboyama, T. Ito, A. Shinohara, T. Ura et al., "Daily assessment of rheumatoid arthritis disease activity using a smartphone application: Development and 3-month feasibility study," in Proceedings of the 8th International Conference on Pervasive Computing Technologies for Healthcare, 2014, pp. 414–417.

[4] T. Suzuki and M. Doi, "LifeMinder: an evidence-based wearable healthcare assistant," in CHI '01 Extended Abstracts on Human Factors in Computing Systems, 2001, pp. 127–128.

[5] A. Komninos and S. Stamou, "HealthPal: an intelligent personal medical assistant for supporting the self-monitoring of healthcare in the ageing society," in Proceedings of UbiHealth, Citeseer, 2006.

[6] I. Lopatovska, K. Rink, I. Knight, K. Raines, K. Cosenza, H. Williams, P. Sorsche, D. Hirsch, Q. Li, and A. Martinez, "Talk to me: Exploring user interactions with the Amazon Alexa," Journal of Librarianship and Information Science, vol. 51, no. 4, pp. 984–997, 2019.

[7] WHO, "Coronavirus Disease (COVID-19) Situation Report-33," 2020, accessed 29 May 2020. [Online]. Available: https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200222-sitrep-33-covid-19.pdf

[8] P. Dhakal, "Novel architectures for human voice and environmental sound recognition using machine learning algorithms," 2018.

AUTHORS' PROFILES

Dr. T. Arumuga Maria Devi received the B.E. degree in Electronics & Communication Engineering from Manonmaniam Sundaranar University, Tirunelveli, Tamil Nadu, India, in 2003, the M.Tech degree in Computer & Information Technology from Manonmaniam Sundaranar University, Tirunelveli, Tamil Nadu, India, in 2005, and the Ph.D. degree in Information Technology (Computer Science and Engineering) from Manonmaniam Sundaranar University, Tirunelveli, Tamil Nadu, India, in 2012. She has been an Associate Professor at the Centre for Information Technology and Engineering of Manonmaniam Sundaranar University since November 2005. Her research interests include Signal Processing, Remote Communication, Multimedia, and Mobile Computing.

M. Divya Magesh, M.Sc. Data Analytics, Centre for Information Technology & Engineering, Manonmaniam Sundaranar University, Abishekapatti, Tirunelveli – 627012, Tamil Nadu, India. She received her Bachelor of Mathematics from Sri GVG Vishalakshi College for Women, Udumalpet. Her research interests include Machine Learning and Natural Language Processing.

R. Arun Bothagar, M.Sc. Cyber Security, Centre for Information Technology & Engineering, Manonmaniam Sundaranar University, Abishekapatti, Tirunelveli – 627012, Tamil Nadu, India. He received his Bachelor of Networking from Subbulakshmipathi College of Science, Madurai, in 2021. His research interests include Social Engineering and Ethical Hacking using the Linux platform.