Our Connect Sense: A Web-App to Enhance Accessibility and Learning for Physically Challenged Students

DOI: 10.17577/IJERTV12IS050112


Payal Maheshwari
Computer Science & Engineering, Vellore Institute of Technology, Vellore, India

Bikash Chauhan
Computer Science & Engineering, Vellore Institute of Technology, Vellore, India

Ayush Kanaujiya
Computer Science & Engineering, Vellore Institute of Technology, Vellore, India

Navamani T
Faculty, Computer Science & Engineering, Vellore Institute of Technology, Vellore, India

Harsh Rajpal
Computer Science & Engineering with specialisation in Information Security, Vellore Institute of Technology, Vellore, India

Abstract – Our Connect Sense is a web application designed to facilitate the learning process for physically challenged students. The project offers a variety of features, such as playing games, completing tests, and listening to text, while navigating the web application using a search bar or voice commands. This initiative aims to provide a simple and accessible way for physically disabled students to interact with and learn from educational content. With the aid of Our Connect Sense, these students can overcome physical obstacles and enhance their learning experience.

Keywords – ConnectSense, web-app, physically challenged students, learning, accessibility, games, tests, voice commands, web development, voice synthesis.

  1. INTRODUCTION

    The internet has revolutionised the way we learn and share information, making it an essential tool for education and knowledge exchange. However, people with disabilities often encounter significant barriers when attempting to access online resources. For individuals who are blind or visually impaired, screen readers can be prohibitively expensive and challenging to use. Meanwhile, deaf or hard-of-hearing individuals may not be able to access online videos or lectures without captions or transcripts. To address these challenges, we have developed "Our Connect Sense," a web application designed to enhance accessibility and learning for students with physical disabilities, including those who are deaf.

    "Our Connect Sense" is a comprehensive web app that provides a variety of features designed to make online

    learning more accessible and user-friendly for students with physical disabilities. The app includes games, quizzes, and text that can be read aloud, all of which can be navigated using a search bar or vocal commands. One of the app's most significant features is its ability to convert speech to text. This feature enables impaired users to read along with lectures, videos, and other audio content, ensuring that they do not miss out on any important information. The app is designed to be easy to use and navigate for students with physical disabilities. Its interface is user- friendly and intuitive, with clear and concise instructions that make it simple to access and use all of its features. The app's voice-command navigability makes it even easier for users to interact with educational content and access all of its features.

    Accessibility is a critical component of the app's design. Our Connect Sense was developed with the goal of improving accessibility for physically challenged students, including those who are deaf or visually impaired. The app's developers understand that not all students have access to the same resources, and that cost can be a significant barrier for many. As such, the app was designed to be affordable and accessible to all students, regardless of their financial situation.

    The app's object recognition feature is particularly useful for visually impaired students. It enables them to identify objects by capturing an image of the object with their phone's camera. The app then uses a speech synthesiser to transform the identified object into an audio format, providing visually impaired students with an enhanced learning experience.

    In addition to providing accessibility features, the app's developers have also focused on creating an engaging and interactive learning experience for students. The app's games and quizzes are designed to be fun and educational, providing students with an enjoyable way to learn and improve their knowledge.

    In conclusion, "Our Connect Sense" is a web application designed to improve accessibility and learning for physically challenged students, including those who are deaf or visually impaired. Its features are designed to be affordable and easy to use, making it accessible to all students, regardless of their financial situation. By making online learning more accessible and user- friendly, we hope to empower physically challenged students and help them reach their full potential.

  2. LITERATURE SURVEY

    Research in the area of object detection has been active for the last decade. C. M. Asad et al. [1] designed a model for hands-free human-computer interaction that uses head movements to control mouse motions. It focuses on classifying a disabled person's head motions for hands-free computer operation. V. Kalist, A. A. F. Joe and A. Veeramuthu [2] proposed a strategy for classroom communication that transforms a speechless student's sign language into a text message for deaf students to read and an audio message for blind students to follow. The model supports Bluetooth-enabled voice assistance for device control. L. Ciabattoni, F. Ferracuti, G. Foresi and A. Monteriù [3] formulated a method for creating a user interface for commercial Smart Home Systems (SHS) that takes into account the demands of users who are blind or deaf. Using a mobile application, the interface can convert alerts and visual information into audio signals and vice versa. B. F. Smaradottir, S. G. Martinez and J. A. Håland [4] worked on assistive technology and found that voice guidance and haptic feedback were effective in helping visually impaired users complete tasks on touchscreens. Their methodology involved recruiting visually impaired individuals and using a combination of surveys, observations, and task performance measures to evaluate the usability and effectiveness of different assistive technologies.

    J. A. P. De Jesus, K. A. V. Gatpolintan, C. L. Q. Manga, M. R. L. Trono and E. R. Yabut [5] presented a framework for developing mobile and web applications to assist visually impaired individuals. The framework consists of three main components: image recognition and object detection, text recognition, and navigation and guidance. They present results of user testing with visually impaired individuals, who found the framework helpful in improving their independence and mobility, as well as in facilitating tasks such as reading and identifying objects. S. Noel [6] proposed an approach that focuses on creating email applications that can translate speech to text and text to speech for users who are blind or visually challenged. The best email format for such users is one that uses speech or audio. The voice commands used by the application are straightforward and user-friendly. Once an email has been composed, the user can check the dictated content using text to speech.

    S. Dobrisek, J. Gros, F. Mihelic and N. Pavesic [7] proposed a database interface in which text data are grouped in text corpora at the information source centre. For blind and visually challenged users, the system's spoken language interface is crucial, notably the text-to-speech component. The system's voice control makes it easier to operate, since it eliminates the need for mechanical interfaces such as a mouse or keyboard. The system is made up of three primary modules: a text-to-speech module, a spoken command recognition module, and a conversation module.

    K. S. Sri, C. Mounika and K. Yamini [8] suggested an audiobook system with a web user interface built in Python. There are four modules in total: image to text and voice, PDF to speech, text to speech, and voice to text. Reading a book, listening to a recorded text, spelling a word, and penning a sentence are among the supported tasks.

    The text-to-speech module reads the entered text aloud, helping users spell words accurately. The voice-to-text module aids in turning spoken words into written ones; deaf users can benefit from it, and readers can improve their reading. The image-to-speech module converts both plain text and the text within an image into voice, and the PDF-to-speech module converts the full PDF file into speech.

  3. PROPOSED WORK

    Fig(i): System Architecture

    Fig(ii): Use case diagram

  4. METHODOLOGY

    ConnectSense consists of three modules: object detection, text to speech, and speech to text. The system for the blind focuses on object recognition, with object identification as its primary function. A speech synthesiser is then used to convert the identified objects into an audible format. The system is also capable of capturing conversations and converting them to text so that deaf individuals can comprehend them. Each of these three modules plays an important role independently. The object detection module identifies objects and helps blind people. The speech-to-text module helps deaf people understand what others are saying, as ConnectSense captures the voice and converts it into text. Finally, the Web Speech API allows easy navigation through the application using voice commands.

    1. YOLO Model (You Only Look Once)

      • The YOLO (You Only Look Once) model is an object detection algorithm that can be used to identify and locate objects in images.

      • In the context of ConnectSense, the YOLO model can be used to detect and identify objects in the camera's real-time video feeds. This would enable the device to identify and track objects within its field of view, such as persons, cars, and other relevant items.

      • Using YOLO in ConnectSense could improve its security and surveillance capabilities, as well as its ability to assist visually impaired individuals. For instance, the device could provide real-time object recognition for those with visual impairments or alert the user when a person or object enters their vicinity.

      • In conclusion, the YOLO model can be implemented in ConnectSense to improve its object detection and recognition capabilities, allowing the device to identify and track objects in real-time video streams captured by its camera, as sketched below. This could have substantial implications for security, surveillance, and assisting those with visual impairments.
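      As an illustration of how this module could run in a browser, the following is a minimal sketch of in-browser object detection. It uses the TensorFlow.js coco-ssd detector as a stand-in for a YOLO model, since both produce labelled detections from a video frame; the 'camera' element ID and the announceObject helper are hypothetical names for illustration, not the exact ConnectSense implementation.

```javascript
// Minimal in-browser object detection sketch. The coco-ssd detector stands
// in for YOLO here; the 'camera' element ID and announceObject() are
// hypothetical names used for illustration.
// (coco-ssd requires @tensorflow/tfjs as a peer dependency.)
import * as cocoSsd from '@tensorflow-models/coco-ssd';

const video = document.getElementById('camera'); // <video> showing the webcam feed

async function startDetection() {
  // Attach the user's camera to the video element.
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const model = await cocoSsd.load(); // load the pre-trained detector

  // Detect objects on each frame and announce confident detections.
  async function detectFrame() {
    const predictions = await model.detect(video);
    for (const p of predictions) {
      // p.class is the label, p.score the confidence, p.bbox the box [x, y, w, h].
      if (p.score > 0.6) announceObject(p.class);
    }
    requestAnimationFrame(detectFrame);
  }
  detectFrame();
}

// Hypothetical helper: speak the detected label so blind users hear it.
function announceObject(label) {
  speechSynthesis.speak(new SpeechSynthesisUtterance(label));
}

startDetection();
```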

    2. Speech Synthesis API of JavaScript

      The Speech Synthesis API in JavaScript generates synthetic speech from text, typically using concatenative synthesis. This entails selecting appropriate speech segments from a database of pre-existing speech segments based on the input text and desired speech settings. The algorithm selects the appropriate segments based on linguistic rules and probabilistic models, and employs signal processing techniques such as filtering and smoothing to enhance the quality of the output speech.

      In particular, the algorithm employs linguistic and probabilistic models such as language models, acoustic models, prosodic models, and pronunciation models to select appropriate speech segments from the database and concatenate them to generate the desired speech output. Language models predict the probability of a specific word sequence occurring in the input text, whereas acoustic models predict the probability of a specific speech segment given its acoustic features. Prosodic models predict the prosodic characteristics of the output speech, whereas pronunciation models predict the correct pronunciation of every word in the input text.

      Using the Speech Synthesis API, these linguistic and probabilistic models are combined to generate synthetic speech from text. Depending on the specific implementation and speech database used, the precise models and algorithms utilised may vary.
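      Whatever synthesis technique a given browser uses internally, the API surface the app interacts with is small. Below is a minimal sketch of reading a passage aloud with the standard SpeechSynthesisUtterance interface; the rate and pitch values and the 'lesson-text' element ID are illustrative assumptions, not tuned settings from the app.

```javascript
// Read a passage aloud using the browser's Speech Synthesis API.
function speakText(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = 'en-US';   // language of the input text
  utterance.rate = 1.0;       // speaking speed (0.1 to 10; 1 is normal)
  utterance.pitch = 1.0;      // voice pitch (0 to 2; 1 is normal)

  // Optionally pick a specific installed voice when one is available.
  const preferred = speechSynthesis.getVoices().find(v => v.lang === 'en-US');
  if (preferred) utterance.voice = preferred;

  speechSynthesis.cancel();   // stop any speech already in progress
  speechSynthesis.speak(utterance);
}

// Example: read the content of a (hypothetical) lesson element aloud.
speakText(document.getElementById('lesson-text').textContent);
```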

    3. Web Speech API for Voice Navigation

      • We can use the Web Speech API provided by modern web browsers to implement voice navigation in the ConnectSense application.

    • The Web Speech API provides speech recognition and speech synthesis directly in the browser, allowing users to navigate an application using voice commands. Using feature detection, it is first determined if the user's browser supports the Web Speech API. Alternative navigation options can be provided if the browser does not support the API.

    • To enable speech recognition, we must construct a SpeechRecognition object and define event listeners for the various speech recognition events. We can then start and stop the speech recognition process as required.

    • When a user speaks a command, the speech recognition API generates a result object comprising the recognised text, which can be used to initiate the corresponding navigation action in the ConnectSense application.

    • For speech output, we can create a Speech Synthesis object and use it to generate speech based on the navigation prompts and feedback of the application.

    • Synthesised speech can also be used to confirm user commands and provide additional information.

    • Finally, the ConnectSense application's voice navigation feature must be tested and refined as necessary to ensure optimal performance and usability; a minimal sketch of the recognition flow follows this list. Through careful design and testing, we can create a seamless and efficient user experience for ConnectSense users.
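    A minimal sketch of this flow, assuming a hypothetical command-to-route table and the prefixed webkitSpeechRecognition constructor used by Chromium-based browsers:

```javascript
// Voice navigation sketch using the Web Speech API.
// The command-to-route table is hypothetical; actual ConnectSense pages may differ.
const commands = {
  'open games': '/games',
  'start quiz': '/quiz',
  'read text': '/reader',
  'go home': '/',
};

// Feature detection: fall back to manual navigation if unsupported.
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
if (SpeechRecognition) {
  const recognition = new SpeechRecognition();
  recognition.lang = 'en-US';
  recognition.continuous = true;      // keep listening across utterances
  recognition.interimResults = false; // act only on final results

  recognition.onresult = (event) => {
    // Take the transcript of the most recent final result.
    const transcript = event.results[event.results.length - 1][0].transcript
      .trim().toLowerCase();
    if (commands[transcript]) {
      // Confirm the command audibly before navigating.
      speechSynthesis.speak(new SpeechSynthesisUtterance(`Opening ${transcript}`));
      window.location.href = commands[transcript];
    }
  };

  recognition.onerror = (event) => console.warn('Recognition error:', event.error);
  recognition.start();
} else {
  console.warn('Web Speech API not supported; provide manual navigation instead.');
}
```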

    The final web app integrates these three modules on a single platform. The application is also navigable by voice commands, making it simpler for physically challenged students to interact with educational content and move around the app. By making online learning more accessible and user-friendly, we aim to empower physically challenged students and assist them in reaching their full potential through the use of the latest technology.

  5. RESULTS AND DISCUSSION

The Our Connect Sense web app can be operated easily through voice commands. The user can create an account, and all data recorded by the application is stored in a local database. Because of its simplicity, our application can help a large population of physically disabled people. Its use is not limited to disabled persons; anyone can use it for day-to-day activities. The application itself guides the user through the features available in the app, making it easy for first-time users to get started without needing to consult the user manual, although a user manual covering all available features is also provided within the app.

We have developed an application that benefits both visually and hearing-impaired people, broadening its range of use cases. It is designed so that even a novice user can enjoy the benefits of our product, and speech navigation makes it very simple to access the application. The aim of this product is to help students in their daily academic activities. We strive to keep making the lives of differently abled people better and more efficient by continuously adding and updating features.

The possibilities are limitless, and we are confident this will bring a huge improvement in quality of living for the people who use it.

CONCLUSION AND FUTURE WORK

The Our Connect Sense software offers a lot of room for growth in terms of accessibility for those with physical limitations. Its capabilities could be expanded to incorporate additional exercises, tests, and audio material for further learning possibilities. By including features that help those with cognitive impairments or learning difficulties, the app could also reach students who are not physically disabled.

The programme may be developed into an online learning platform that offers certification programmes or degrees to students with physical disabilities. The software can help close the accessibility gap in education for physically challenged students, giving them access to the same possibilities as their peers, as its reach and functionalities grow. The Our Connect Sense programme has the potential to significantly contribute to making the Internet a more welcoming and accessible place for all users with further development and improvement.

Future development can add support for native languages, allowing users to select the language of their choice. This will increase the number of people who benefit from our application.

REFERENCES

[1] C. M. Asad et al., "Removing Disabilities: Controlling Personal Computer Through Head Movements and Voice Command," 2018 IEEE 12th International Conference on Application of Information and Communication Technologies (AICT), Almaty, Kazakhstan, 2018, pp. 1-4, doi: 10.1109/ICAICT.2018.8747123.

[2] V. Kalist, A. A. F. Joe and A. Veeramuthu, "An Aid for Speechless, Visually and Hearing Impaired Students in Classroom," 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 2020, pp. 104-108, doi: 10.1109/ICCES48766.2020.9137961.

[3] L. Ciabattoni, F. Ferracuti, G. Foresi and A. Monteriù, "Hear to see – See to hear: a Smart Home System User Interface for visually or hearing-impaired people," 2018 IEEE 8th International Conference on Consumer Electronics – Berlin (ICCE-Berlin), Berlin, Germany, 2018, pp. 1-2, doi: 10.1109/ICCE-Berlin.2018.8576163.

[4] B. F. Smaradottir, S. G. Martinez and J. A. Håland, "Evaluation of touchscreen assistive technology for visually disabled users," 2017 IEEE Symposium on Computers and Communications (ISCC), Heraklion, Greece, 2017, pp. 248-253, doi: 10.1109/ISCC.2017.8024537.

[5] J. A. P. De Jesus, K. A. V. Gatpolintan, C. L. Q. Manga, M. R. L. Trono and E. R. Yabut, "PerSEEption: Mobile and Web Application Framework for Visually Impaired Individuals," 2021 1st International Conference in Information and Computing Research (iCORE), Manila, Philippines, 2021, pp. 205-210, doi: 10.1109/iCORE54267.2021.00055.

[6] S. Noel, "Human computer interaction (HCI) based Smart Voice Email (Vmail) Application – Assistant for Visually Impaired Users (VIU)," 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 2020, pp. 895-900, doi: 10.1109/ICSSIT48917.2020.9214139.

[7] S. Dobrisek, J. Gros, F. Mihelic and N. Pavesic, "HOMER: a voice-driven text-to-speech system for the blind," ISIE '99. Proceedings of the IEEE International Symposium on Industrial Electronics (Cat. No.99TH8465), Bled, Slovenia, 1999, pp. 205-208 vol. 1, doi: 10.1109/ISIE.1999.801785.

[8] K. S. Sri, C. Mounika and K. Yamini, "Audiobooks that converts Text, Image, PDF-Audio & Speech-Text: for physically challenged & improving fluency," 2022 International Conference on Inventive Computation Technologies (ICICT), Nepal, 2022, pp. 83-88, doi: 10.1109/ICICT54344.2022.9850872.