- Open Access
- Authors : Abin Abraham , Arjun Sajeev , Deepak P Menon , Sreehari J, Lani Rachel Mathew
- Paper ID : IJERTV11IS100046
- Volume & Issue : Volume 11, Issue 10 (October 2022)
- Published (First Online): 19-10-2022
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Transportation Aiding System for the Visually Impaired
Abin Abraham, Arjun Sajeev, Deepak P Menon, Sreehari J, Lani Rachel Mathew
Department of Electronics and Communication Engineering,
Mar Baselios College of Engineering and Technology, Trivandrum, India
Abstract: The world is becoming increasingly busy, and visually impaired persons encounter numerous obstacles in this ever-changing environment. One of the major challenges they face is the use of public transportation, in particular bus travel. To solve these problems and to guarantee the safety of visually impaired and blind people (VIBP), a Transportation Aiding System for the Visually Impaired has been proposed. This assistive-technology-based system comprises a SIM800A GSM-GPRS module with GSM antenna, a NEO-6M GPS module, an Arduino UNO board, and a speaker module, and it works together with a mobile application. The system uses voice recognition to interface with the user. Since the system uses a GPS module, the user receives real-time information about the route being travelled in the form of speech signals. The proposed method has been tested and confirmed to be effective and simple to use.
Keywords: Visually Impaired and Blind People, Mobile Application, Public Transport, Global Positioning System
INTRODUCTION
India is home to about 1.4 billion people, a majority of whom depend on public transport for their daily commute. These include women, children, the elderly, and people with disabilities. Visually impaired and blind people (VIBP) face many issues in their daily lives, one of which is public transportation. At present, public transport management systems (PTMS) have been implemented in almost all cities and play a significant part in improving efficiency. Information regarding the arrival and departure times of buses is displayed on digital screens, but the needs of blind users are not considered. Over the years, several technologies have been developed to help the VIBP, but their functionality depends on a constant internet connection and several other factors that make them complex for users. This paper focuses on the VIBP section of society, enabling a user to identify the bus in advance and making bus travel easier and simpler.
According to our studies, Global Positioning System (GPS) technology was found to be the best alternative among all. GPS makes navigation easy and is low-cost compared to other navigation devices. GPS has 100% coverage on Earth, i.e., it is available everywhere, and it is very easy to integrate into other devices. It is also easy to implement on a bus, because each bus follows the same route; bus tracking is done using GPS technology. In our project, a VIBP arriving at the bus stop can select the bus tracking option in the app in order to track the bus from the previous stop onwards. Once the bus has arrived, the speaker announces the route from the current bus stop; this helps the user find the position of the door and board the bus. Once inside the bus, the user can select the stop tracking option, which gives verbal instructions about the upcoming bus stop. With this information, the user can decide whether to leave at that particular stop or not. All the verbal instructions given to the user are already fed into the database. The route information from the speaker covers the current stop and the next stops, not the previous stops. These data are also stored in another database.
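The announcement behaviour described above, covering only the current and upcoming stops and never the previous ones, can be sketched as a small routine. This is an illustrative sketch, not the actual app code; the function name and stop names are hypothetical:

```python
def route_announcement(route_stops, current_stop):
    """Return the verbal route text from the current stop onward.

    route_stops: ordered list of stop names for the bus route (assumed to
    come from the pre-filled stop database); current_stop must be one of
    them. Previous stops are deliberately excluded, matching the behaviour
    described in the text.
    """
    idx = route_stops.index(current_stop)
    upcoming = route_stops[idx + 1:]
    if not upcoming:
        return "This is the last stop."
    return "Route from here: " + ", ".join(upcoming)

# Example: a bus halfway through its route announces only what lies ahead.
stops = ["Medical College", "Pattom", "Palayam", "East Fort"]
print(route_announcement(stops, "Pattom"))  # Route from here: Palayam, East Fort
```

The same slice-from-current-index idea serves both the boarding announcement and the in-bus stop tracking.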
The paper is organized as follows: Section II presents the literature survey. Section III presents the system description. Section IV presents the validation experiments and system recognition, which were carried out in a university under controlled conditions. Section V presents the system in outdoor navigation through validation experiments successfully carried out in a city in a real environment. In Section VI, the experimental results and discussion are presented. Finally, in Section VII, some conclusions are drawn.
LITERATURE SURVEY
Technologies for helping blind and visually challenged people have grown tremendously, focusing on providing support for daily activities so that these users can fit into current society without discrimination. Some examples of the work accomplished mainly provide assistance in avoiding obstacles and navigating within a fixed environment, as well as systems that identify surrounding objects. Only a few systems incorporate both navigation and recognition. These problems were addressed by one system described in the literature, which mainly focuses on autonomous movement and on the identification and recognition of objects present in an indoor environment. Light hardware such as a camera, an IMU, and laser sensors was placed on the chest within a compact box. This requires a database for the storage of prior information about the objects present in the environment before the device is used. The main algorithms used were completely based on machine learning approaches. Speech recognition modules were used for communication between the user and the system. The proposed system has two main functions: helping the user walk along a specific path while avoiding all types of obstacles, and giving the user information about all the objects present in the field of view. These functions were illustrated through various experiments within a fixed environment. The system is completely confined to a particular space and works only with a specific set of data already stored in the database, which gives an idea of what is in front of the individual currently using the system.
The next technology that can help visually impaired people incorporates Bluetooth technology, which allows accurate indoor localization of people visiting a cultural institution. The users are equipped with a Bluetooth Low Energy (BLE) device, normally found in modern smartphones, which transmits packets that are received by BLE receivers in the area. The received packets are sent to a server to locate the position of each visitor inside that area. Positions are estimated by a feed-forward neural network trained through a measurement campaign, and the approach also provides a method for deploying the BLE receivers in a given area. The system was found to have increased accuracy when the position of each user was within the range of the receivers, and the user needs to be within a specific distance for the data to be transmitted. There are also a few limitations of this technology, including a low range of operation, a low data transmission rate, vulnerability, and signal variations (which can cause a loss of accuracy).
A wearable smart system for visually impaired people (VIP) can also be introduced; the main advantage of this proposed system is that it can navigate VIP in public places. It makes use of a wearable smart device and is also designed to seek assistance when required. The system comprises a number of sensors and GPS modules, and it requires cellular communication along with a solar panel. The sensors work in combination to identify obstacles in the path of the user. The system also provides vibration feedback in case the user is in a noisy environment. The main disadvantage of the system is that the user must be taught how the system works and cannot easily understand its properties alone. The system is also somewhat bulky, as it includes a solar panel for power backup, which makes it uncomfortable for some users.
The use of tele-guidance can also be considered as a way of helping visually impaired people. It is mainly designed to act as a navigation assistant for the visually impaired and blind. It works through a compactly packed system with a mobile phone placed in front of the user's chest, which gives a live feed of the user's point of view to a caretaker or another individual. The caretaker views this live feed, gives information to the VIBP, and guides them around any obstacles in the path. Touch-based feedback may be provided to the user through a smart cane. Two main problems were tested: the first was based on right-left movement, and the other provided different vibration patterns for different purposes. Group discussions and questionnaires served as ways to acquire feedback from users, and the users of the system gave positive results. The blind participants mainly preferred one actuator in place of two; similarity with normal cane usage was the important factor in this choice. Users found that the mobile phone camera provided sufficient information to the caretakers for their assistance, but they identified the camera angle as one of the important problems to be considered. Some users found it difficult to mount the device, and the device is costly compared to other devices with the same application. The speed and stability of the network connection are also important factors to be considered.
Advancements in Wi-Fi technology can also be incorporated into systems that help visually impaired people. This involves utilizing the Wi-Fi hotspots being set up at bus stops nowadays. However, in the current scenario, not every bus stop in every state provides free Wi-Fi connectivity. It is too difficult to store the information of each and every router in a database, and not all routers have open connectivity, which might lead to an inability to identify some of the bus stops.
SYSTEM DESCRIPTION
The approaches used to arrive at the desired solution are primarily discussed in this section, along with the hardware, software, components, and main technologies used. The hardware side consists of a GPS-based system built on GPS-GSM technology; this is a device placed inside the bus for constant tracking of the bus. The software side consists of a mobile application that has to be installed on the user's phone; it is developed using the MIT App Inventor platform. The working of the system depends mainly on the app. There are three options in the app: (i) Stop Tracking, (ii) Bus Tracking, and (iii) Stop. These can be used by the user for identifying the buses and the bus stops. Instructions are provided to the user verbally for efficient utilization. The stop locations are already provided in the database, so when the user gives the destination, the app notifies the user whether any bus within a specific distance is approaching the current stop where the user is standing. The stop database is activated once the user is inside the bus, in order to give information about the next stops in advance so that the user can be prepared. Once a bus is arriving, the app provides the information to the user in the form of verbal instructions; if no bus is approaching, "no bus approaching" will be the output. Once the user is inside the bus, clicking the Bus Stop Tracking option lets the user know the next bus stop in advance so that he or she can get off at the required stop. The app is made as user friendly as possible: the button for activating voice recognition has been kept large so that it dominates the screen and the user will find it easier to activate.
Fig. 1 App Architecture
A. Process from the User's Point of View
It involves finding the right bus with the Bus Tracking option in the app and boarding it. The user reaches the stop and opens the app in order to give the destination as a voice command. The app then tracks the available buses and provides the information to the user. The speaker makes the boarding procedure easier, and the user can enter the bus. Inside the bus, the user finds the right stop at which to leave with the help of the Stop Tracking option in the app. As soon as stop tracking starts, navigation begins from the next stop onwards; after each stop, the navigation for the previous stop turns off and the navigation towards the next stop starts. This makes the system more efficient in terms of the speed of gathering information, as only one navigation task runs at a particular time.
Fig. 2 Methodology
After reviewing various components and options and weighing their benefits and drawbacks, the final technique chosen for executing the solution consists of the following elements and tools:
A GPS module consists of antennas and tiny processors that receive the signals sent by each satellite and also consider the time of arrival of each signal, providing communication support to the users. GPS modules are compatible with Arduino and Raspberry Pi boards and can be used in a variety of ways; many social applications in particular can be developed with them. As a result, GPS modules are used in a variety of industries, such as environmental measurement, transportation, emergency rescue, agriculture, and entertainment. In this system, the module is used to continuously track the bus; with its help, details regarding the location of the bus along its route are provided to the users via the app. The GPS device used in this system consists of a SIM800A GSM/GPRS module and a NEO-6M GPS module.
The GPS module traces and obtains the location information from satellites in the form of longitude and latitude. The microcontroller processes this information and sends it to the GSM modem, which in turn sends the information to the owner's cellular device. An Arduino UNO is used to control the whole unit, with a GPS receiver and GSM module attached. The SIM800A Quad-Band GSM/GPRS module with RS232 interface is a complete quad-band GSM/GPRS solution in the LGA (land grid array) form factor that may be used in customer applications. The SIM800A has quad-band 850/900/1800/1900 MHz capability, allowing it to carry voice, SMS, and data while consuming very little power. The SIM800A modem contains a SIM800A GSM chip and an RS232 interface, making it simple to connect to a PC or laptop via a USB-to-serial adapter, or to a microcontroller via an RS232-to-TTL converter. The specifications of this module include an input voltage of 9-12 V DC, an operating temperature of -40 °C to +85 °C, the bands GSM 850 MHz, EGSM 900 MHz, DCS 1800 MHz, and PCS 1900 MHz, and the TCP/IP protocol stack.
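The step from raw satellite data to latitude/longitude coordinates hinges on parsing the NMEA sentences that GPS receivers such as the NEO-6M emit over serial. Below is a minimal illustrative sketch of decoding a $GPGGA sentence into decimal degrees; the sample sentence is a standard textbook example, not a recording from this system:

```python
def parse_gpgga(sentence):
    """Parse latitude/longitude (decimal degrees) from a NMEA $GPGGA
    sentence, the kind of frame a GPS receiver streams over UART.
    Returns None when the sentence reports no GPS fix."""
    fields = sentence.split(",")
    if fields[0] != "$GPGGA" or fields[6] == "0":
        return None

    def to_degrees(value, hemisphere, head_len):
        # NMEA packs coordinates as (d)ddmm.mmmm: whole degrees followed
        # by decimal minutes, which must be divided by 60.
        degrees = float(value[:head_len])
        minutes = float(value[head_len:])
        dd = degrees + minutes / 60.0
        return -dd if hemisphere in ("S", "W") else dd

    lat = to_degrees(fields[2], fields[3], 2)   # ddmm.mmmm
    lon = to_degrees(fields[4], fields[5], 3)   # dddmm.mmmm
    return lat, lon

# Example fix (sample sentence, illustrative only):
lat, lon = parse_gpgga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
print(round(lat, 4), round(lon, 4))  # 48.1173 11.5167
```

In practice a library such as TinyGPS on the Arduino side performs this decoding, but the arithmetic is the same.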
The NEO-6M module series comprises part of the family of high-performance u-blox 6 positioning engines used in conventional GPS receivers. Unlike other GPS modules, it can update its location up to 5 times per second, with a horizontal position precision of 2.5 meters. The u-blox 6 positioning engine has a Time To First Fix of less than a second. The power-saving facility of the chip is one of its most useful features: the ON and OFF states of the receiver can be selected by switching, which allows a reduction in system power usage. This decreases the module's power consumption to considerably low values, making it suitable and ideal for power-constrained uses such as GPS wristwatches. The NEO-6M GPS chip's essential data pins are broken out to 0.1-inch pitch headers, and UART communication with a microcontroller is provided through the various pins required for different purposes. It has an input voltage of about 2.7-6 V, an operating temperature range of -25 to 88 °C, and a serial baud rate of 4800-230400.
ARDUINO UNO BOARD
The Arduino Uno is an open-source microcontroller board based on the Microchip ATmega328P microcontroller and built by Arduino.cc. Digital and analog input/output pins on the board can be used to connect adaptation boards (shields) and other circuits. The board has 14 digital I/O pins (six of which are capable of PWM output) and 6 analog input pins, and it can be programmed using the Arduino IDE (Integrated Development Environment) through a type-B USB connector. It provides a variety of communication options for connecting to a portable computer, another Arduino board, or other microcontrollers connected to a computer. The board includes about 6 PWM outputs for time-variant signals, an electronic oscillator, a Universal Serial Bus port, a power connector, a voltage regulator, an ICSP (In-Circuit Serial Programming) header, and a reset button. The Arduino Uno board establishes communication between the GPS module and the app through the ThingSpeak server. From a prototyping point of view, it is the most widely used microcontroller board in the various fields of electronics design. The ports can be set to send data to the components connected to the board, making it possible to handle the different needs of the user. In our system, mainly the MP3 module, the GSM module, and the GPS module are connected to the microcontroller. The protocols used in the communication are UART, I2C, and SPI (Serial Peripheral Interface). The clock speed is approximately 16 MHz, the flash memory is 32 KB, and 2 KB of SRAM is also provided (ATmega328P).
DFPLAYER MINI SPEAKER MODULE
The DFPlayer Mini is a small and inexpensive MP3 module that may be attached directly to a speaker and has a DC voltage range of 3.3-5 V. The module can be used on its own with a battery, speaker, and keypad, or with an Arduino UNO or any microcontroller that has a serial port, through serial-port control. Hardware decoding of MP3, WAV, and WMA files is fully incorporated into the module. The software includes a TF card driver that supports the FAT16 and FAT32 file systems. Playback can be controlled using simple serial commands that select the track and invoke other functions, without the need for a complicated operating system. The digital data stored in the SD card is read by the mini player and converted to analog form by the digital-to-analog converter in the module itself. The specifications include sampling rates that differ with the type of digital data being processed in the DFPlayer Mini module. The input voltage can be between 3.2 and 5 V, the standard value for the module being 3.3 V. The FAT16 and FAT32 file systems are supported by the player, but NTFS is not. Being an MP3 module, MP3 is its primary supported format.
MIT APP INVENTOR
MIT App Inventor is a user-friendly, visual programming environment that enables anyone, including children, to create fully working Android, iPhone, and Android/iOS tablet apps. In less than 30 minutes, those new to MIT App Inventor may have a simple first app up and running. Furthermore, its blocks-based technology allows the creation of complicated, high-impact programs in a fraction of the time required in typical programming environments. The MIT App Inventor initiative aims to democratize software development by allowing anyone, especially young people, to move from technology consumption to production. The coding used in MIT App Inventor is block coding. The App Inventor servers store your work and help you keep track of your projects, so there is no loss of the work done by the user.
It consists of and makes extensive use of a graphical user interface (GUI), similar to Scratch (a programming language) and StarLogo, allowing users to drag and drop a variety of visual objects to create applications that can run on Android-based devices; the App Inventor Companion (a program that enables the app to be run and debugged) provides the foundation for testing the app. In creating App Inventor, Google drew on research significant to educational computing and on private work done by Google on online development environments. Projects involving App Inventor are based on constructionist learning theories, demonstrating that programming can be used as a vehicle for a variety of powerful ideas that engage learners through active learning. It contributes to the ongoing research in the realm of computers and education that surrounds it. On the basis of an experimental Firebase Realtime Database component, App Inventor supports the use of cloud data. In essence, App Inventor is a free cloud-based tool that enables users to create their own apps using a programming language built from a variety of block sizes and types. A web browser, such as Chrome, Firefox, or Safari, can be used to access App Inventor, and its development environment is supported on Mac OS, GNU/Linux, and Windows. Any Android-based phone can easily accept the installation of an App Inventor-based app.
The ThingSpeak server is a cloud-based IoT analytics tool that lets you aggregate, visualise, and analyse live data streams. ThingSpeak delivers real-time representations of data sent to the platform by your devices, and with the ability to run MATLAB code in ThingSpeak, you can analyse and handle data as it arrives in real time. ThingSpeak is frequently used for IoT prototypes and proofs of concept that require analytics. It is also equipped with a variety of built-in numerical computing capabilities, such as MATLAB from MathWorks, which enables ThingSpeak users to visualise and analyse uploaded data using MATLAB without having to purchase a licence from MathWorks. Additionally, articles on ThingSpeak have appeared on specialised "maker" websites such as Instructables, CodeProject, and Channel 9 that demonstrate the processing and display of diverse digital data. In this project, the data obtained from the GPS module (latitude and longitude points) is uploaded to this server. Its features include: easily configuring devices to send data to ThingSpeak using popular IoT protocols; visualising sensor data in real time; aggregating data on demand from third-party sources; using the power of MATLAB to make sense of IoT data; running IoT analytics automatically based on schedules or events; prototyping and building IoT systems without setting up servers or developing web software; and automatically acting on data and communicating using third-party services like Twilio or Twitter.
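As a rough sketch of how the app side might read the latest bus position back from ThingSpeak: the channel read endpoint returns the last entry as JSON, with each uploaded value in a numbered field. The channel ID, read key, and the field1 = latitude / field2 = longitude mapping below are assumptions for illustration, not values from the actual deployment:

```python
import json

# A ThingSpeak read request looks like (placeholders, not real credentials):
#   GET https://api.thingspeak.com/channels/<CHANNEL_ID>/feeds/last.json?api_key=<READ_KEY>

def latest_position(feed_json):
    """Extract (lat, lon) from a ThingSpeak 'last entry' JSON payload,
    assuming field1 holds latitude and field2 holds longitude."""
    entry = json.loads(feed_json)
    return float(entry["field1"]), float(entry["field2"])

# Parsed from a sample payload (coordinate values illustrative only):
sample = ('{"created_at":"2022-10-01T10:00:00Z","entry_id":42,'
          '"field1":"8.5241","field2":"76.9366"}')
print(latest_position(sample))  # (8.5241, 76.9366)
```

The app then compares this position against the stop database to compute the approach distance.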
Openrouteservice (ORS for short) is much more than a website with a route service for cars, pedestrians, and bicycles based on open standards and open geodata. Several Location Based Services (LBS) created from OSM data are available, developed by HeiGIT (Heidelberg Institute for Geoinformation Technology). Using a wide range of services based on OSM data, Openrouteservice is much more than just a routing service and can be used in a wide variety of scenarios and applications. In the current iteration of Openrouteservice, the following services have been implemented.
1) The Directions service computes navigational routes and information based on a variety of factors. Profiles are available for cars (shortest and recommended routes, with options to avoid tolls, tunnels, and so on), many heavy-vehicle profiles (delivery, forestry, bus, etc.) with numerous adjustable options, pedestrian (normal and hiking), wheelchair, and bicycle (regular, mountain, road, and electric) routes. 2) The POIs service is a tool for finding the location of the nearest or most specific store, business, or service online. 3) Using the street network in the vicinity of a given place, the Isochrones service determines a polygon that represents the area that may be reached in a specific amount of time. 4) The Geocode service offers a geocoder/reverse geocoder, which normalises a description of a location, such as a place name, street address, or postal code, into a description with a point geometry. Digitised polygons on the map can be marked for avoidance in later routing. GPS tracks can be downloaded and uploaded in a variety of formats. Surface type, way types, gradient, and suitability for the chosen profile, as well as a height profile, can be displayed for the pedestrian and bicycle profiles.
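A hedged sketch of how a client might assemble a request to the ORS Directions service (v2 API): ORS expects coordinates in [longitude, latitude] order in the request body, and the API key below is a placeholder, not a working credential:

```python
def ors_directions_request(start, end, profile="driving-car"):
    """Build the URL, headers, and JSON body for an Openrouteservice
    Directions request (v2 API). `start` and `end` are (lat, lon) pairs;
    ORS expects [lon, lat] order in the body. The key is a placeholder."""
    url = f"https://api.openrouteservice.org/v2/directions/{profile}"
    body = {"coordinates": [[start[1], start[0]], [end[1], end[0]]]}
    headers = {"Authorization": "<YOUR_ORS_API_KEY>",
               "Content-Type": "application/json"}
    return url, headers, body

# Hypothetical trip between two points in Trivandrum (coordinates illustrative):
url, headers, body = ors_directions_request((8.5241, 76.9366), (8.4855, 76.9492))
print(url)
print(body["coordinates"])
```

The returned route geometry and duration are what the app would relay to the user as total distance and travel time.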
IMPLEMENTATION OF SYSTEM
Fig. 3 shows the architecture of the whole system. The central element is the application, which is connected to two inputs or servers: one is the ThingSpeak server and the other is Openrouteservice. The information collected by the hardware cannot be given directly to the application, so the ThingSpeak server acts as an intermediate server onto which the hardware information is uploaded. The information from the navigation services is also fed into the application. The hardware fixed on the bus uploads the location information, i.e., the latitude and longitude, to the ThingSpeak server, and it is then read in the app by using the API key. The application is designed so that it produces its results in the form of voice and text, which can be perceived by the user. As per the real-time conditions, route information and locations are taken from the navigation services and uploaded to the app for the proper functioning of the system.
Fig. 3 Architecture of whole system
Fig. 4 explains the basic working of the system. It mainly focuses on the fact that the user doesn't need the assistance of a second person in order to use public transportation. The system is most suitable for implementation using a GPS module in the current scenario. Consider a user waiting at the stop to board a bus: he opens the app and gives a voice instruction stating where he wants to reach, i.e., the destination. The app then verbally provides information to the user about the number of buses available on that route and the time taken for each bus to reach the stop. Even people who are not visually challenged can use this app, as it has been designed for both sections of society. The total distance to the destination and the travel time are also provided. Once the bus reaches the stop, the speaker attached to the door announces the route taken by the bus so that the user can make sure he is entering the right bus. When inside the bus, the app gives the user information about each and every stop and provides repeated announcements as the destination approaches. The program has been set to repeat each instruction so that it becomes easier for the visually challenged person to make sure he got the right information. The process involves a number of steps, starting from the person opening the app to list the available buses, until the person reaches the destination and gets off the bus.
Fig. 4. Block diagram of functioning
The process is as follows:
1) The user (a visually impaired or blind person) is initially at the bus stop waiting for the desired bus.
2) The user takes out his/her mobile phone and opens the app installed on the device. The app opens and the user inputs the desired destination in the form of voice prompts.
3) The app then checks whether any buses are available and calculates the distance between the stop location and the destination. If buses are available, the app notifies the user and gives the details of the arriving bus.
4) When the bus arrives, the speaker mounted outside the bus alerts the user (VIBP), thereby helping the user locate the bus door.
5) Upon entering the bus, the app switches to stop tracking mode, in which the user is notified about the upcoming stops. When the bus comes close to the destination stop, the app alerts the user in the form of voice signals.
6) When the user reaches the destination, the app terminates automatically.
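Steps 5 and 6 amount to a proximity check between the bus position and the destination stop. A minimal sketch using the haversine formula, with an assumed 200 m alert threshold (the paper does not state the threshold it actually uses):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def stop_alert(bus_pos, stop_pos, threshold_m=200):
    """Decide whether to announce the approaching stop. The 200 m
    threshold is an assumed value for illustration."""
    return haversine_m(*bus_pos, *stop_pos) <= threshold_m

# Bus ~100 m from the stop -> announce; ~1 km away -> stay silent.
print(stop_alert((8.5241, 76.9366), (8.5250, 76.9366)))  # True
print(stop_alert((8.5241, 76.9366), (8.5330, 76.9366)))  # False
```

Repeating this check on every position update from the server yields the repeated voice alerts described above.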
The experimental setup consists of a hardware part and a software part. The hardware part (see Fig. 5) consists of a GPS-based system comprising a NEO-6M GPS module and a SIM800A GSM module interfaced with an Arduino UNO, along with a speaker module. The hardware part of the system is placed inside the bus. The software part is a mobile app which should be installed on the mobile phone of the user.
Fig. 5 Hardware
The synchronized reading of the latitude and longitude coordinates from the GPS device is viewed in the serial monitor of the Arduino IDE for verification.
Fig. 6 Data obtained from GPS module
Whenever the GPS module detects a location, the corresponding latitude and longitude coordinates are uploaded to the ThingSpeak server, and from there the data is obtained in the app.
Fig. 7 Data in ThingSpeak server
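The upload itself can be as simple as an HTTP GET against the ThingSpeak update endpoint. A sketch of building that request URL, assuming field1 carries latitude and field2 carries longitude (the write key is a placeholder, and a real channel defines its own field mapping):

```python
def thingspeak_update_url(write_key, lat, lon):
    """Build the GET request URL the bus hardware would issue to push a
    GPS fix to ThingSpeak. The field1/field2 mapping and the write key
    are assumptions for illustration."""
    return ("https://api.thingspeak.com/update"
            f"?api_key={write_key}&field1={lat}&field2={lon}")

print(thingspeak_update_url("<WRITE_KEY>", 8.5241, 76.9366))
```

On the Arduino side, the SIM800A issues the equivalent request over GPRS using AT commands.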
Next is the software section which, as previously discussed, should be installed on the mobile phone.
Fig. 8 Application Screen 1
This is the welcome screen of the application. On this screen there is a single button which occupies almost 60%-70% of the screen. This button is used to record the place where the user has to go. Once the button is clicked, the application navigates to an intermediate screen where the voice input given by the user is captured. The starting screen is designed in such a way that the visually impaired user can easily access the button to give his or her voice input.
Fig. 9 Interim screen for recording the user's voice input
The application is designed to recognize almost all types of English accents spoken by the user with a considerable degree of accuracy. During the testing phase, the application worked flawlessly without any hangs or delays. This screen pops up when the user enables the speech recognizer to give a voice instruction.
Fig. 10 Application Screen 2
Fig. 10 shows the second screen of the application. Once the voice input (the destination) from the user is recognized by the application, the second screen comes into play. This screen has more options, and more information is passed to the user through voice alerts. First, it announces the destination given by the user. Then the number of buses travelling on the given route is announced. After that, the estimated distance of the bus from the stop where the user is currently standing is calculated and announced through the application. This is followed by a voice input button, which the user needs to tap to confirm whether they are ready to board the bus based on the information given. If they gave the wrong destination by mistake, they can navigate back to the previous screen (screen 1) by tapping the button on screen 2 and saying "no". If the person using the app is not visually challenged, another option is provided for viewing the position of the bus and all the related details. The button named Route navigates to that particular screen, which is optional.
Fig. 11 Application Screen 3
Fig. 11 shows the third screen of the application, which is optional as mentioned before. After clicking the Route button, screen 3 opens; otherwise the app moves to screen 4. Here the distance and time for the entire journey are stated, along with a map showing the path along which the bus will travel to take the user to his or her destination. This screen was provided to make the app useful for everyone, not confining it to the visually challenged members of society.
The final screen of the application allows the user to see their current location on the map provided. For visually impaired users, voice assistance is provided continuously to alert them about upcoming stops, so that they can get ready to alight from the bus when their desired location is reached.
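The continuous stop-alert behaviour described above can be sketched as a small polling routine. The stop names, the 50 m threshold, and the announce() callback below are illustrative assumptions, not details from the paper; the distance function would typically be the GPS-based distance between the bus and each stop.

```python
def announce(message: str) -> None:
    # Stand-in for the app's text-to-speech voice alert.
    print(message)

def alert_upcoming_stops(remaining_stops, distance_to, threshold_m=50.0):
    """Announce and drop each stop once the bus comes within threshold_m.

    remaining_stops: ordered list of stop names, nearest first.
    distance_to: callable mapping a stop name to the bus's current
                 distance from it in metres (e.g. from GPS fixes).
    Returns the list of stops still ahead after this polling cycle.
    """
    while remaining_stops and distance_to(remaining_stops[0]) <= threshold_m:
        announce(f"Approaching {remaining_stops[0]}. Get ready to alight.")
        remaining_stops = remaining_stops[1:]
    return remaining_stops
```

Calling this once per GPS update keeps the voice alerts in step with the bus's actual position.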
Response Time: Response time refers to the time required to upload data from the hardware to the ThingSpeak server and then retrieve it from the server in the app using the channel's API key. Response time is affected by factors such as network bandwidth, number of users, number and type of requests submitted, and average think time. A series of trials was conducted, and the resulting response time values are as follows:
Table 1. Performance Parameters: Response Time (s) per trial
Fig. 12 Application Screen 4
Fig. 12 shows the fourth screen of the application. This screen asks the user whether he or she has boarded the chosen bus. If the answer is yes, the navigation screen opens, giving the user information about each approaching stop. This page thus collects the user's further status, i.e., whether he or she has boarded the bus as per the information provided by the application and its associated sources.
Fig. 13 Application screen 5
Average response time = 26.9 seconds
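The retrieval half of this round trip can be timed with a short script. The sketch below assumes the standard ThingSpeak "read last entry" REST endpoint; the channel ID and read API key are placeholders, not the authors' actual credentials, and the averaging helper mirrors how the trial values above would be combined.

```python
import time
import urllib.request

# Placeholders -- substitute a real channel ID and read API key.
CHANNEL_ID = "0000000"
READ_API_KEY = "XXXXXXXXXXXXXXXX"

def fetch_latest_feed(channel_id: str, api_key: str) -> bytes:
    """Read the most recent feed entry from a ThingSpeak channel."""
    url = (f"https://api.thingspeak.com/channels/{channel_id}"
           f"/feeds/last.json?api_key={api_key}")
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()

def average_response_time(fetch, trials: int = 10) -> float:
    """Time fetch() over several trials and return the mean in seconds."""
    samples = []
    for _ in range(trials):
        start = time.monotonic()
        fetch()
        samples.append(time.monotonic() - start)
    return sum(samples) / len(samples)

# Example (requires a live channel):
#   mean = average_response_time(
#       lambda: fetch_latest_feed(CHANNEL_ID, READ_API_KEY))
```

Timing from before the request to after the response arrives captures the network factors listed above in a single number per trial.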
Accuracy: Accuracy in this system refers to how accurately the system measures distance values. When the user and the bus were at the same location, the system showed some variation in the measured distance between them. A series of trial test runs was conducted, and the values obtained were as follows:
Table 2. Accuracy: distance error (m) per trial
Average accuracy = 11.2 meters
CONCLUSION

In this busy environment, a major concern of visually impaired people is boarding buses, i.e., identifying the correct bus. There are many conventional methods, ranging from seeking assistance from another person to interacting with the bus driver, and each has its demerits. The proposed system solves these common difficulties. It has multiple advantages: it gives the user information about the bus and the route by which the bus reaches its destination, making it easy for the user to select the appropriate bus. The system is user-friendly, less complicated, and more precise than similar systems, and thus greatly helps the safe bus travel of visually impaired people. The proposed system also contributes to the development of a city that adapts to the needs of its users, using technology for sustainability and making life easier for visually impaired and blind people (VIBP). Compared to other projects, the mobile app has only one button on each page, making it as comfortable as possible for a visually challenged user. The use of GPS makes this project applicable anywhere in the world with internet access.
ACKNOWLEDGMENT

The authors wish to convey their gratitude to Prof. Dr. Jayakumari J from the Department of Electronics and Communication Engineering at Mar Baselios College of Engineering and Technology, Trivandrum, India, for the valuable guidance and suggestions throughout the duration of our research work. We are obliged to Ms. Ancy S. Anselam, Associate Professor, our project coordinator, and Dr. Lani Rachel Mathew, Assistant Professor, our project guide, both from the Department of Electronics and Communication Engineering at Mar Baselios College of Engineering and Technology, Trivandrum, India, for all the help and guidance given to us in doing this project and for their contributions towards its successful completion. We take this opportunity to thank all the teaching staff, our seniors, and colleagues who have directly or indirectly helped in our project. Our acknowledgement would not be complete without gratitude to our parents, who have been pillars of support and constant encouragement throughout the project.