IoT-Based Smart Assistant System for Disabled Persons

DOI: 10.17577/IJERTV13IS040335


Mrs. S. Priya
Assistant Professor (SG), Department of CSE
Nehru Institute of Engineering and Technology
Coimbatore, India

Mrs. Jasmine Punitha
Assistant Professor, Department of CSE
Nehru Institute of Engineering and Technology
Coimbatore, India

V. Sanjaikumar
Department of CSE
Nehru Institute of Engineering and Technology
Coimbatore, India

K. Vimalpandiyan
Department of CSE
Nehru Institute of Engineering and Technology
Coimbatore, India

P. Jeyasanthosh
Department of CSE
Nehru Institute of Engineering and Technology
Coimbatore, India

K. Karthik
Department of CSE
Nehru Institute of Engineering and Technology
Coimbatore, India

Abstract: The Smart Assistive Walking Stick for the Visually Impaired represents a significant advancement in aiding individuals with visual impairments to navigate their surroundings safely and independently. Leveraging state-of-the-art technologies including computer vision, object detection, and audio feedback, this innovative system offers real-time identification and vocalization of obstacles and objects encountered during mobility. With integrated obstacle detection and orientation features, along with voice alerts and supplementary visual feedback, this comprehensive solution ensures enhanced navigation and obstacle avoidance capabilities.

Keywords: Smart system, visual impairment, ultrasonic sensor, object recognition, YOLO (You Only Look Once).

INTRODUCTION

The Smart Assistive Walking Stick for the Visually Impaired introduces a pioneering approach to empower individuals with visual impairments to navigate their environments with confidence and autonomy. This groundbreaking system harnesses cutting-edge technologies, amalgamating computer vision, object detection, and auditory feedback mechanisms to provide real-time assistance in identifying and circumventing obstacles during mobility. By seamlessly integrating advanced features such as obstacle detection, orientation assistance, and vocal alerts, coupled with supplementary visual cues, this innovative solution sets a new standard in enhancing navigation and obstacle avoidance for the visually impaired community. In an era where technology continues to revolutionize accessibility and inclusivity, this project stands as a beacon of progress, poised to profoundly impact the lives of those with visual impairments by fostering greater independence and safety in their daily travels.

The Smart Assistive Walking Stick for the Visually Impaired embodies a transformative leap in the realm of assistive technology, offering a multifaceted solution to address the challenges faced by individuals with visual impairments in navigating their surroundings. By leveraging state-of-the-art technologies and innovative design principles, this project aims to provide an inclusive and empowering tool for enhancing mobility and independence.

Navigating through environments fraught with obstacles can pose significant challenges for individuals with visual impairments, often limiting their ability to explore and engage with the world around them. Traditional walking aids provide basic support but lack the sophistication needed to identify and alert users to potential hazards in real-time. Recognizing this gap, our project endeavors to bridge it by integrating advanced features that not only detect obstacles but also provide intuitive feedback to users, enabling them to make informed decisions and navigate safely.

At the heart of our system lies computer vision technology, which enables the detection and recognition of objects and obstacles in the user's path. By employing machine learning algorithms, the walking stick can discern between various objects and classify them accordingly, from stationary obstacles like poles and furniture to dynamic hazards such as moving vehicles or pedestrians. This real-time object recognition capability forms the foundation of our assistive system, empowering users with timely information about their surroundings.

In addition to object detection, our project incorporates auditory feedback mechanisms to convey critical information to users. Through the use of voice alerts and cues, the walking stick communicates the presence and location of obstacles, enabling users to adjust their path or take necessary precautions. This auditory interface is designed to be intuitive and non-intrusive, providing assistance without overwhelming the user's senses.
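As an illustrative sketch of how such voice alerts might be generated on the stick's onboard computer, the snippet below uses the pyttsx3 offline text-to-speech library; the library choice and the alert wording are our assumptions, as the paper does not name a specific speech engine.

import pyttsx3  # offline text-to-speech; assumed library, not specified in the paper

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slightly slower speech for clarity

def announce_obstacle(label, distance_cm):
    # Compose a short, non-intrusive alert such as "chair ahead, 80 centimeters"
    message = f"{label} ahead, {int(distance_cm)} centimeters"
    engine.say(message)
    engine.runAndWait()  # block until the alert has been spoken

announce_obstacle("chair", 80)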

Furthermore, our system includes orientation assistance features to help users maintain their sense of direction and spatial awareness. By utilizing GPS technology or indoor positioning systems, the walking stick can provide users with guidance and route information, allowing them to navigate unfamiliar environments with ease. This aspect of the project aims to reduce the anxiety and uncertainty often associated with traveling independently for individuals with visual impairments.
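The published design does not specify the positioning hardware. As one hedged sketch, assuming a GPS receiver that streams standard NMEA sentences over a serial UART (the port name and baud rate below are assumptions), the current position could be read as follows:

import serial  # pyserial; assumes a GPS module wired to a serial port

def nmea_to_decimal(value, hemisphere):
    # NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm
    degrees = int(float(value) / 100)
    minutes = float(value) - degrees * 100
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

with serial.Serial("/dev/ttyAMA0", 9600, timeout=1) as gps:  # port/baud are assumptions
    while True:
        line = gps.readline().decode("ascii", errors="ignore").strip()
        if line.startswith("$GPGGA"):
            fields = line.split(",")
            if fields[6] != "0":  # field 6 is fix quality; 0 means no fix yet
                lat = nmea_to_decimal(fields[2], fields[3])
                lon = nmea_to_decimal(fields[4], fields[5])
                print(f"Position: {lat:.5f}, {lon:.5f}")
                break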

OBJECTIVES:

  1. Develop a Smart Assistive Walking Stick for the Visually Impaired integrating computer vision and audio feedback to enhance obstacle detection and navigation independence.

  2. Create a comprehensive solution utilizing Raspberry Pi, YOLO algorithm, ultrasonic sensors, and audio-visual feedback to provide real-time assistance for visually impaired individuals navigating their surroundings.

    EXISTING SYSTEM:

    OpenCV (Open Source Computer Vision Library) can be utilized in sensor-based smart assistant systems for disabled persons for tasks such as object detection, gesture recognition, and facial recognition. However, it has some disadvantages in this context:

    1. Hardware Requirements: OpenCV algorithms can be resource-intensive, requiring powerful hardware for real-time processing. This can be a limitation for sensor-based systems, which may have limited computing resources.

    2. Environmental Variability: OpenCV algorithms may struggle with variability in lighting conditions, background clutter, and occlusions. This can affect the reliability of the system, especially in real-world environments where conditions can change unpredictably.

    3. Complexity: Implementing and fine-tuning OpenCV algorithms for specific tasks can be complex and time-consuming. This complexity may pose challenges for developers in customizing the system to meet the unique needs and preferences of individual users.

    4. Privacy Concerns: Facial recognition algorithms in OpenCV raise privacy concerns, as they may inadvertently capture and analyze sensitive personal information. Ensuring user consent and data privacy protection is essential but challenging to achieve.

    5. Limited Accessibility Features: While OpenCV offers powerful computer vision capabilities, it may not inherently support all accessibility features required for disabled users. Additional customization and integration with assistive technologies may be necessary to address specific accessibility needs.

    In contrast, the proposed system adopts the YOLO algorithm for object detection, which offers the following advantages over an OpenCV-only pipeline:

    6. Speed: YOLO is highly optimized for speed, capable of processing images and detecting objects in real time on standard hardware. In contrast, implementing object detection tasks solely using OpenCV may not achieve the same level of real-time performance without specialized optimization.

    7. Accuracy: YOLO is known for its high accuracy in object detection, especially in scenarios where objects may be small or closely packed together. Its single-stage architecture enables it to make predictions with higher precision compared to traditional multi-stage approaches.

    8. End-to-End Solution: YOLO provides an end-to-end solution for object detection tasks, eliminating the need for separate preprocessing and post-processing steps typically required when using OpenCV with other object detection algorithms. This simplifies the development process and improves overall efficiency.

    9. Flexibility: YOLO can be trained on custom datasets to detect specific objects relevant to the application domain. While OpenCV offers a wide range of image processing functions, it may require more effort to customize for specific object detection tasks compared to using a dedicated algorithm like YOLO.

    10. Community Support: YOLO has a large and active community of developers, researchers, and enthusiasts continuously improving the algorithm, providing pre-trained models, and sharing implementation resources. This makes it easier to leverage state-of-the-art object detection capabilities compared to relying solely on OpenCV.

METHODOLOGY

The development of the Smart Assistive Walking Stick for the Visually Impaired will follow a structured methodology aimed at ensuring the effectiveness, reliability, and usability of the proposed system. The methodology will begin with a comprehensive review of existing literature and assistive technologies to identify relevant research findings, technological advancements, and user needs. Following this, the project will enter the design phase, where the specifications and requirements of the walking stick will be defined based on input from stakeholders, including individuals with visual impairments, caregivers, and healthcare professionals.

Once the design parameters are established, the development process will commence, involving the implementation of the necessary hardware and software components. This will include the integration of sensors, cameras, microcontrollers, and communication modules to enable object detection, data processing, and feedback mechanisms. The software development will focus on creating algorithms for object recognition, classification, and auditory feedback, leveraging machine learning techniques to enhance accuracy and responsiveness.

Throughout the development process, iterative testing and evaluation will be conducted to assess the performance and usability of the Smart Assistive Walking Stick. This will involve both simulated testing in controlled environments and real-world trials with individuals with visual impairments to gather feedback and identify areas for improvement. The iterative nature of the testing process will allow for continuous refinement of the system, ensuring that it meets the needs and expectations of its intended users.

Finally, the methodology will culminate in the deployment and dissemination of the Smart Assistive Walking Stick, with a focus on promoting accessibility, inclusivity, and user empowerment. This will involve collaborating with relevant stakeholders, including advocacy groups, assistive technology organizations, and healthcare providers, to ensure widespread adoption and impact. By following this structured methodology, the project aims to deliver a robust, user-centered solution that enhances the mobility and independence of individuals with visual impairments.

    COMPONENTS OF HARDWARE

  1. RASPBERRY PI:

The Raspberry Pi 3 Model B+ is the latest product in the Raspberry Pi 3 range, boasting a 64-bit quad-core processor running at 1.4 GHz, dual-band 2.4 GHz and 5 GHz wireless LAN, Bluetooth 4.2/BLE, faster Ethernet, and PoE capability via a separate PoE HAT. The dual-band wireless LAN comes with modular compliance certification, allowing the board to be designed into end products with significantly reduced wireless LAN compliance testing, improving both cost and time to market. The Raspberry Pi 3 Model B+ maintains the same mechanical footprint as both the Raspberry Pi 2 Model B and the Raspberry Pi 3 Model B.

Fig.1: Raspberry Pi

  2. LCD DISPLAY:

    A liquid crystal display (LCD) is a thin, flat electronic visual display that uses the light modulating properties of liquid crystals (LCs). LCs do not emit light directly.

    Fig.2 : LCD Display

They are used in a wide range of applications, including computer monitors, televisions, instrument panels, aircraft cockpit displays, and signage. They are common in consumer devices such as video players, gaming devices, clocks, watches, calculators, and telephones. LCDs have displaced cathode ray tube (CRT) displays in most applications. They are usually more compact, lightweight, portable, less expensive, more reliable, and easier on the eyes. They are available in a wider range of screen sizes than CRT and plasma displays, and since they do not use phosphors, they cannot suffer image burn-in. LCDs are more energy efficient and offer safer disposal than CRTs. Their low electrical power consumption enables them to be used in battery-powered electronic equipment. An LCD is an electronically modulated optical device made up of any number of pixels filled with liquid crystals and arrayed in front of a light source (backlight) or reflector to produce images in color or monochrome. The earliest discovery leading to the development of LCD technology, the discovery of liquid crystals, dates from 1888. By 2008, worldwide sales of televisions with LCD screens had surpassed the sale of CRT units.

  3. ULTRASONIC SENSOR:

The HC-SR04 ultrasonic sensor is marketed as a ranging module, as it can accurately measure distances in the range of 2 cm to 400 cm with an accuracy of 3 mm. To send the 40 kHz ultrasound burst, the TRIG pin of the sensor must be held HIGH for a minimum duration of 10 µs.

    Fig.3 : Ultrasonic sensor

After this, the ultrasonic transmitter emits a burst of eight pulses of ultrasound at 40 kHz. Immediately, the control circuit in the sensor drives the ECHO pin HIGH. This pin stays HIGH until the ultrasound hits an object and returns to the ultrasonic receiver.

Based on the time for which the ECHO pin stays HIGH, you can calculate the distance between the sensor and the object. Because the pulse travels to the object and back, the one-way distance is the speed of sound (340 m/s) multiplied by half the echo time. For example, if ECHO is HIGH for 588 µs:

Distance = Speed of Sound × (Time / 2) = 340 m/s × (588 µs / 2) = 340 m/s × 294 µs ≈ 10 cm.

  4. LEAD ACID BATTERY:

Lead acid batteries are the most common large-capacity rechargeable batteries. They are very popular because they are dependable and inexpensive on a cost-per-watt basis. Few other batteries deliver bulk power as cheaply as lead acid, and this makes the battery cost-effective for automobiles, electric vehicles, forklifts, marine applications, and uninterruptible power supplies (UPS).

Lead acid batteries are built with a number of individual cells containing layers of lead alloy plates immersed in an electrolyte solution, typically made of 35% sulphuric acid (H2SO4) and 65% water (Fig. 4). Pure lead (Pb) is too soft and would not support itself, so small quantities of other metals are added to provide mechanical strength and improve electrical properties. The most common additives are antimony (Sb), calcium (Ca), tin (Sn) and selenium (Se). When the sulphuric acid comes into contact with the lead plate, a chemical reaction occurs and energy is produced.
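For reference, the overall discharge reaction behind this energy production, a standard result for lead acid cells not spelled out in the text above, is:

Pb + PbO2 + 2H2SO4 → 2PbSO4 + 2H2O

Charging drives the same reaction in reverse, regenerating the lead, lead dioxide, and sulphuric acid.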

    Fig.4: Typical Lead acid battery

Fig.4.1: Typical vented Lead acid battery

  5. SERVO MOTOR:

A servo motor is a type of rotary actuator that allows for precise control of angular position. It consists of a small DC motor, a set of gears, and a feedback mechanism. Here's how it works:

Motor: The heart of a servo motor is a DC motor, usually a small, high-speed motor. This motor is responsible for providing the mechanical power required to drive the system.

Gears: The output shaft of the motor is connected to a series of gears. These gears help reduce the rotational speed of the motor while increasing torque, allowing the servo motor to exert more force on the output shaft.

Feedback Mechanism: One of the key features of a servo motor is its feedback mechanism, typically in the form of a potentiometer or an encoder. This mechanism provides feedback to the controller about the current position of the motor shaft.

Control Circuitry: The servo motor is controlled by a control circuit, which receives commands from an external source, such as a microcontroller or a computer. The control circuit compares the desired position (setpoint) with the actual position provided by the feedback mechanism and adjusts the motor's speed and direction accordingly to minimize the error.

Closed-Loop Control: Servo motors operate on a closed-loop control system, meaning that they continuously monitor their position and make adjustments to maintain the desired position accurately. This closed-loop control mechanism ensures precise positioning and repeatability, making servo motors suitable for applications requiring accurate control, such as robotics, automation, and motion control systems.
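The paper does not include servo control code; as a minimal sketch of commanding a hobby servo from the Raspberry Pi with RPi.GPIO software PWM (the GPIO pin and the 2.5% to 12.5% duty-cycle range are assumptions typical of hobby servos, not taken from the published design):

import RPi.GPIO as GPIO
import time

SERVO_PIN = 18  # assumed GPIO pin; any free GPIO works for software PWM

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # standard 50 Hz servo control signal
pwm.start(0)

def set_angle(angle):
    # Map 0 to 180 degrees onto a 2.5% to 12.5% duty cycle (1 ms to 2 ms pulse at 50 Hz)
    duty = 2.5 + (angle / 180.0) * 10.0
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.3)  # give the servo time to reach the position
    pwm.ChangeDutyCycle(0)  # stop driving to reduce jitter

set_angle(90)  # center the servo
pwm.stop()
GPIO.cleanup()

Hardware PWM or a driver library such as pigpio produces steadier pulses than software PWM, which is worth considering if the servo jitters.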


OUTPUT IMAGES:

CODE PART:

import RPi.GPIO as GPIO
import time
import cv2
import numpy as np

# Set up GPIO for the ultrasonic sensor
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)

TRIG = 23
ECHO = 24

GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

# Function to measure distance with the HC-SR04
def measure_distance():
    # Send a 10 µs trigger pulse
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    pulse_start = time.time()
    pulse_end = time.time()

    # ECHO stays HIGH while the 40 kHz burst is in flight
    while GPIO.input(ECHO) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:
        pulse_end = time.time()

    # Round trip at 34300 cm/s, so one-way distance in cm = duration * 34300 / 2
    pulse_duration = pulse_end - pulse_start
    distance = pulse_duration * 17150
    distance = round(distance, 2)
    return distance

# Set up camera
def setup_camera():
    cap = cv2.VideoCapture(0)
    cap.set(3, 640)   # frame width
    cap.set(4, 480)   # frame height
    return cap

# Load YOLOv3 weights, config, and class names
def load_yolo():
    net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")
    with open("coco.names", "r") as f:
        classes = f.read().splitlines()
    return net, classes

# Function to detect objects in a frame
def detect_objects(frame, net, output_layers, classes):
    height, width, channels = frame.shape

    # 0.00392 = 1/255 scales pixels to [0, 1]; 416x416 is YOLOv3's input size
    blob = cv2.dnn.blobFromImage(frame, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
    net.setInput(blob)
    outs = net.forward(output_layers)

    class_ids = []
    confidences = []
    boxes = []
    for out in outs:
        for detection in out:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:
                # Detections give box center and size, normalized to the frame
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)

    # Non-maximum suppression removes overlapping duplicate boxes
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    for i in range(len(boxes)):
        if i in indexes:
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            confidence = confidences[i]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label + " " + str(round(confidence, 2)), (x, y + 30),
                        cv2.FONT_HERSHEY_PLAIN, 3, (0, 255, 0), 3)
    return frame

# Main function
def main():
    cap = setup_camera()
    net, classes = load_yolo()
    layer_names = net.getLayerNames()
    # Note: on OpenCV >= 4.5.4, getUnconnectedOutLayers() returns scalars,
    # so use layer_names[i - 1] instead of layer_names[i[0] - 1]
    output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]

    try:
        while True:
            _, frame = cap.read()
            frame = cv2.flip(frame, 1)
            frame = detect_objects(frame, net, output_layers, classes)
            distance = measure_distance()
            cv2.putText(frame, "Distance: " + str(distance) + " cm", (50, 50),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
            cv2.imshow("Frame", frame)
            key = cv2.waitKey(1)
            if key == 27:  # ESC exits
                break
    except KeyboardInterrupt:
        print("Exiting")
    finally:
        cap.release()
        cv2.destroyAllWindows()
        GPIO.cleanup()

if __name__ == "__main__":
    main()
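To try this out, assuming a Raspberry Pi with a camera, the HC-SR04 wired to GPIO 23/24, and the files yolov3.weights, yolov3.cfg, and coco.names downloaded into the working directory (the script name below is hypothetical):

python3 smart_stick.py

Note that full YOLOv3 is heavy for a Raspberry Pi; a lighter variant such as YOLOv3-tiny is a common substitution when frame rate matters.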

LIVE DEMO IMAGES:

CONCLUSION:

In conclusion, the Smart Assistive Walking Stick for the Visually Impaired represents a significant advancement in assistive technology, offering a comprehensive solution to address the mobility challenges faced by individuals with visual impairments. Through the integration of cutting-edge technologies such as computer vision, object detection, and auditory feedback mechanisms, the walking stick provides users with real-time assistance and feedback to navigate their surroundings safely and independently.

The development and implementation of this project have been guided by a commitment to inclusivity, accessibility, and user empowerment. By prioritizing user-centered design principles and incorporating feedback from stakeholders, including individuals with visual impairments, caregivers, and healthcare professionals, the walking stick has been tailored to meet the unique needs and preferences of its intended users.

Furthermore, the iterative testing and evaluation process has ensured the reliability, effectiveness, and usability of the Smart Assistive Walking Stick in diverse real-world scenarios. Through continuous refinement and adaptation, the system has demonstrated its ability to provide accurate and timely assistance, enhancing the mobility and independence of individuals with visual impairments.

Looking ahead, the impact of the Smart Assistive Walking Stick extends beyond mere navigation: it fosters a sense of dignity, autonomy, and inclusion for individuals with visual impairments. By breaking down barriers and empowering users to navigate their surroundings with confidence and ease, the walking stick opens up new opportunities for exploration, engagement, and participation in society.

In essence, the Smart Assistive Walking Stick for the Visually Impaired stands as a testament to the transformative power of technology when guided by empathy, innovation, and a commitment to social good. As we continue to push the boundaries of what is possible, let us remain steadfast in our pursuit of creating a more accessible, inclusive, and equitable world for all.

FUTURE SCOPE:

Future work in the realm of sensor-based smart assistant systems for disabled persons could focus on several areas to enhance functionality, accessibility, and usability. Here are some potential directions for future research and development.

  • Improved sensor technologies.

  • Integration of AI and machine learning.

  • Enhanced human-computer interaction.

ACKNOWLEDGEMENT:

I would like to express my deepest gratitude to all those who have contributed to this project. Firstly, I am indebted to our guide, Mrs. S. Priya, for her invaluable guidance, support, and encouragement throughout this journey. Her expertise and mentorship have been instrumental in shaping the direction of this work.

I am also thankful to Nehru Institute of Engineering and Technology for providing the necessary resources and facilities for conducting this research.

Furthermore, I extend my appreciation to my batch mates Vimalpandiyan K., Jeyasanthosh P., Karthik K., and Sanjaikumar V. for their financial support, which enabled us to carry out this research. Their funding has been instrumental in covering expenses related to equipment, materials, and other essential resources.

I am grateful to our colleagues and fellow researchers who have provided valuable insights, feedback, and collaboration throughout this project. Their contributions have enriched the quality of this work and facilitated meaningful discussions and exchanges of ideas.

Last but not least, I would like to thank our family and friends for their unwavering support, understanding, and encouragement during this endeavor. Their love, patience, and encouragement have been a constant source of motivation and inspiration.

In conclusion, I acknowledge the collective efforts of all the individuals and organizations mentioned above, without whom this project would not have been possible. Thank you for your invaluable contributions and support.

REFERENCES:

  1. N. Loganathan, K. Lakshmi, N. Chandrasekaran, S. R. Cibisakaravarthi, R. Hari Priyanga and K. Harsha Varthini, "Smart Stick for Blind People," 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS).

  2. Prashik Chavan, Kartikesh Ambavade, Siddhesh Bajad, Rohan Chaudhari and Roshani Raut, "Smart Blind Stick," 2022 6th International Conference on Computing, Communication, Control and Automation (ICCUBEA).

  3. Rajanish Kumar Kaushal, K. Tamilarasi, P. Babu, T. A. Mohanaprakash, S. E. Murthy and M. Jogendra Kumar, "Smart Blind Stick for Visually Impaired People using IoT," 2022 International Conference on Automation, Computing and Renewable Systems (ICACRS).

  4. Vanitha Kunta, Charitha Tuniki and U. Sairam, "Multi-Functional Blind Stick for Visually Impaired People," 2020 5th International Conference on Communication and Electronics Systems (ICCES).

  5. Mukesh Prasad Agrawal and Atma Ram Gupta, "Smart Stick for the Blind and Visually Impaired People," 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT).

  6. Saurav Mohapatra, Subham Rout, Varun Tripathi, Tanish Saxena and Yepuganti Karuna, "Smart Walking Stick for Blind Integrated with SOS Navigation System," 2018 2nd International Conference on Trends in Electronics and Informatics (ICOEI).

  7. R. Kavitha and K. Akshatha, "Smart Electronic Walking Stick for the Blind People," 2023 2nd International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation (ICAECA).

  8. Naiwrita Dey, Ankita Paul, Pritha Ghosh, Chandrama Mukherjee, Rahul De and Sohini Dey, "Ultrasonic Sensor Based Smart Blind Stick," 2018 International Conference on Current Trends towards Converging Technologies (ICCTCT).