
Visual Impairment Support

DOI: https://doi.org/10.5281/zenodo.19511559


Sweta Waghmare

Department of Electronics and Telecommunication Engineering Pillai College of Engineering Panvel, India

Reva Patil

Department of Electronics and Telecommunication Engineering Pillai College of Engineering Panvel, India

Shravani Margaj

Department of Electronics and Telecommunication Engineering Pillai College of Engineering Panvel, India

Vinanti Bhoir

Department of Electronics and Telecommunication Engineering Pillai College of Engineering Panvel, India

Shivam Bhakare

Department of Electronics and Telecommunication Engineering Pillai College of Engineering Panvel, India

Abstract – Visual impairment severely limits an individual's ability to interact with and interpret their surroundings. Traditional assistive aids such as canes or guide dogs provide only limited environmental awareness. This paper presents the design and implementation of a smart assistive device integrating IoT sensors, computer vision, and artificial intelligence to enhance mobility and safety for visually impaired individuals. The proposed system employs a Raspberry Pi 4, ultrasonic sensors, and a camera module for obstacle detection and object recognition. Real-time feedback is delivered through a buzzer and GSM-based alerts. Experimental results demonstrate accurate obstacle detection up to 4 m and reliable communication of emergency messages. The solution offers a low-cost, portable, and scalable framework to improve independent navigation for the visually impaired.

Index Terms: Visual impairment, assistive technology, embedded system, Raspberry Pi, ultrasonic sensor, artificial intelligence, GSM.

  1. INTRODUCTION

    The ability to move freely and safely is one of the most fundamental aspects of human independence. For individuals with visual impairments, however, even basic navigation tasks can be overwhelming due to the lack of visual cues. According to the World Health Organization (WHO), more than 285 million people worldwide live with some form of visual impairment, including 39 million who are completely blind and 246 million with low vision [1]. The impact extends beyond mobility; it affects education, employment, and social participation.

    Conventional mobility aids such as white canes provide a tactile sense of surroundings but have inherent limitations. Their detection range rarely exceeds one meter, and they cannot differentiate between stationary and moving obstacles. Guide dogs, on the other hand, offer enhanced mobility but are costly to train and maintain, and their availability is limited [2]. These factors underline the urgent need for affordable, intelligent, and portable systems that extend the user's perception beyond the immediate tactile range.

    The convergence of AI, embedded computing, and IoT technologies has enabled smarter assistive devices that bridge this gap. Devices equipped with sensors and cameras can perceive the environment, recognize objects, and relay this information to users through sound or vibration feedback. Unlike purely mechanical aids, these systems interpret data in real time, mimicking certain aspects of human vision and decision-making.

    The work presented here, titled "Visual Impairment Support", proposes a wearable assistive system built around a Raspberry Pi 4 platform that integrates ultrasonic sensing for obstacle detection, computer vision for object identification, and GSM communication for emergency alerts. Mounted on a cap or pair of glasses, the device continuously scans the environment and provides feedback via audio or haptic signals. The primary objective is to enhance personal safety and independence without compromising affordability or usability.

  2. RELATED WORK

    The pursuit of assistive technologies for visually impaired users has evolved over decades, with many early solutions relying on ultrasonic sensors for distance measurement. Meenakshi and Rajesh [2] proposed a cane-based system that employed ultrasonic transceivers to detect nearby obstacles and produce warning beeps. While simple and effective, such systems were limited to short-range detection and could not convey information about object type or movement.

    Subsequent research incorporated computer vision and AI models to recognize objects using cameras. OpenCV-based frameworks [3] and lightweight deep learning models have shown promising results in recognizing everyday objects with considerable accuracy. However, these models often require significant processing power, limiting their real-time performance on portable devices.

    Recent works have sought to combine sensor-based and vision-based systems for better situational awareness.

    Nguyen et al. [5] explored an IoT-enabled system that processed sensor data at the edge to reduce latency. Similarly, Singh and Yadav [6] proposed an AI-based vision system capable of classifying obstacles but noted constraints in outdoor usability due to lighting variations. These studies indicate a growing interest in multi-modal sensing but also highlight the trade-off between cost, power, and accuracy.

    The Visual Impairment Support System builds upon this foundation by integrating both ultrasonic ranging and AI-based image recognition on a single embedded platform. Unlike prior systems that depend on cloud connectivity or large-scale hardware, this approach prioritizes real-time operation, energy efficiency, and affordability, making it viable for everyday use in low-resource settings.

  3. PROPOSED METHODOLOGY

    1. Overview:

      The Visual Impairment Support System (VISS) continuously monitors the user's surroundings using both distance and vision-based inputs. The device operates in five sequential stages (Fig. 1):

      1. Data Acquisition: Sensors collect distance and visual data.

      2. Preprocessing: Noise reduction and frame enhancement for clearer image recognition.

      3. Object Detection and Classification: Real-time identification using OpenCV algorithms.

      4. Decision Logic: Determination of threat level and appropriate alert type.

      5. Feedback Delivery: Communication through audio and vibration, and GSM-based emergency notification.

        Each stage operates in real time on the Raspberry Pi 4, enabling immediate and context-aware feedback to the user.

    2. Algorithmic Process:

      The software running on the Raspberry Pi uses Python scripts to coordinate sensor readings and AI models. Object recognition employs Haar-cascade classifiers, which identify patterns such as faces, vehicles, or signboards in real time. The algorithm is optimized for lightweight computation, ensuring smooth execution without external GPUs.

      The decision logic is implemented as a rule-based algorithm:

      If distance < threshold: activate buzzer/vibration.

      If the recognized object is in {vehicle, wall, human}: trigger a voice alert.

      If the manual emergency button is pressed: send an SMS through GSM.

      This deterministic logic makes the system transparent, interpretable, and easy to debug.

  4. SYSTEM ARCHITECTURE

    1. Overview:

      Fig. 1. System Block Diagram

      The architecture of VISS is designed to ensure compactness, modularity, and real-time performance. It consists of three primary layers:

      1. Sensing Layer

      2. Processing Layer

      3. Feedback and Communication Layer

      Each layer is tightly coupled yet individually replaceable, making future upgrades straightforward.

    2. Flow of Operation:

      At startup, the Raspberry Pi initializes its sensors and begins capturing ultrasonic data to measure distances. Simultaneously, the Pi Camera streams video frames that are processed to detect and classify surrounding objects. Once both data streams are synchronized, the system fuses the inputs to interpret the environment.

      If an obstacle is detected within a predefined distance threshold (typically less than 50 cm), the system triggers the vibration motor. For identified moving or hazardous objects, an audio message is played to alert the user. In case of an emergency, the GSM module sends an SMS alert to a caregiver's phone number stored in memory.

      This integrated operation ensures that even if one sensing modality fails (e.g., the camera in low light), the other (the ultrasonic sensor) continues to provide essential feedback.
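The rule-based decision logic described above can be sketched as a small Python function. The 50 cm threshold and the object labels follow the rules stated in the paper; the function and action names are illustrative:

```python
DISTANCE_THRESHOLD_CM = 50          # proximity limit stated in the paper
ALERT_OBJECTS = {"vehicle", "wall", "human"}

def decide(distance_cm, detected_objects, emergency_pressed):
    """Map one cycle of sensor inputs to the set of feedback actions."""
    actions = set()
    if distance_cm < DISTANCE_THRESHOLD_CM:
        actions.add("buzzer_vibration")      # rule 1: obstacle too close
    if ALERT_OBJECTS & set(detected_objects):
        actions.add("voice_alert")           # rule 2: hazardous object seen
    if emergency_pressed:
        actions.add("gsm_sms")               # rule 3: manual emergency button
    return actions
```

Because each rule is independent, several alerts can fire in the same cycle, and each rule can be unit-tested in isolation, which is what makes the deterministic design easy to debug.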

    3. Sensing Layer:

      This layer collects environmental information using multiple input modules. An ultrasonic sensor (HC-SR04) measures the distance between the user and surrounding obstacles, while a Pi Camera captures live video for object detection. A push button, connected to the PWKEY pin, allows the user to manually trigger emergency actions. All sensor grounds are connected to a common reference to ensure stable and accurate operation.
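A minimal reading routine for the HC-SR04 might look like the following sketch. The GPIO module is passed in as a parameter so the timing logic can be exercised off-device; the timeout value is an assumption, not a figure from the paper:

```python
import time

SPEED_OF_SOUND_CM_S = 34300

def pulse_to_cm(pulse_width_s):
    """Convert an echo pulse width (seconds) to distance in cm.
    The round-trip time is halved to get the one-way distance."""
    return pulse_width_s * SPEED_OF_SOUND_CM_S / 2.0

def read_distance_cm(gpio, trig, echo, timeout_s=0.04):
    """Trigger an HC-SR04 and time its echo pulse.
    `gpio` is an RPi.GPIO-like module, injected so the logic stays testable."""
    gpio.output(trig, True)           # 10 microsecond trigger pulse
    time.sleep(10e-6)
    gpio.output(trig, False)
    deadline = time.time() + timeout_s
    while gpio.input(echo) == 0:      # wait for the echo pulse to start
        if time.time() > deadline:
            return None               # no echo: out of range or wiring fault
    start = time.time()
    while gpio.input(echo) == 1:      # wait for the echo pulse to end
        if time.time() > deadline:
            return None
    return pulse_to_cm(time.time() - start)
```

Returning `None` on timeout lets the decision logic treat "no reading" differently from "far away", which matters for the fallback behaviour between modalities.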

    4. Processing Layer:

      The Raspberry Pi 4 serves as the core processing unit. It runs Python-based scripts on Raspbian OS to fuse sensor data and execute computer vision algorithms using OpenCV Haar-cascade classifiers, combining ultrasonic ranging with vision-based recognition to create a robust perception map of the user's environment. A buck converter regulates the 5 V supply to 4.0 V, powering external modules. Decoupling capacitors of 1000 µF and 100 nF are added to minimize voltage ripple and ensure reliable power delivery.
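The fusion step performed by this layer can be sketched as a function that tolerates the failure of either modality, matching the low-light fallback behaviour described earlier. The report structure and field names are illustrative, not taken from the paper:

```python
DISTANCE_THRESHOLD_CM = 50   # proximity limit stated in the paper

def fuse(distance_cm, detections):
    """Merge one ultrasonic reading with one frame's camera detections.
    Either input may be None when that modality fails (e.g. camera in the
    dark, or an ultrasonic timeout); the other still contributes."""
    report = {"obstacle_near": False, "labels": []}
    if distance_cm is not None:
        report["distance_cm"] = distance_cm
        report["obstacle_near"] = distance_cm < DISTANCE_THRESHOLD_CM
    if detections is not None:
        report["labels"] = [label for label, _box in detections]
    return report
```

Keeping the fused report as a plain dictionary means the downstream rule-based logic never has to know which sensor produced each field.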

    5. Feedback and Communication Layer:

      This layer delivers alerts to the user through multiple feedback mechanisms. A DFPlayer Mini module, powered through the buck converter, plays pre-recorded voice alerts via an 8 Ω speaker, informing the user of detected obstacles. A buzzer connected to GPIO18 provides immediate proximity warnings. A GSM module enables emergency SMS notifications. 1N4148 diodes are incorporated for signal isolation and protection, while all grounds are interconnected to prevent ESD damage and system instability.

      Function        GPIO Pin
      Buzzer          GPIO18
      DFPlayer RX     GPIO20
      DFPlayer TX     GPIO16
      PWKEY           GPIO26

      Table I. GPIO pin mapping between the Raspberry Pi and the peripheral modules.
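The mapping in Table I can be mirrored as a small configuration structure in the control software, with a startup check against accidental double assignment. BCM numbering is assumed here, since the table does not state the numbering scheme:

```python
# GPIO pin map taken from Table I (BCM numbering assumed).
PIN_MAP = {
    "buzzer":      18,   # immediate proximity warnings
    "dfplayer_rx": 20,   # serial link to the DFPlayer Mini
    "dfplayer_tx": 16,
    "pwkey":       26,   # manual emergency push button
}

def check_pin_map(pin_map):
    """Fail fast at startup if two functions share a GPIO pin."""
    pins = list(pin_map.values())
    if len(pins) != len(set(pins)):
        raise ValueError("duplicate GPIO assignment in pin map")
    return True
```

Centralising the assignments in one dictionary means a future hardware revision only touches this table, which suits the layered, individually replaceable design.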

  5. IMPLEMENTATION AND RESULTS

    Fig. 2. Image Capturing and Object Detection

    Fig. 3. Prototype and Hardware Demonstration

    The prototype was assembled according to the circuit diagram (Fig. 4.2), which integrates the Raspberry Pi 4, Pi Camera, ultrasonic sensor, buck converter, DFPlayer Mini module, buzzer, and speaker. The circuit ensures proper power regulation and noise suppression through decoupling capacitors and diodes, enabling stable operation across different environments.

    Extensive testing was conducted in both indoor and outdoor scenarios to evaluate detection range, response time, and accuracy. Table II summarizes the performance metrics for various test scenarios.

      Test Scenario        Avg Range (m)    Detection Accuracy (%)    Response Time (s)
      Indoor Corridor      2.5              98                        1.5
      Outdoor Pavement     3.8              96                        2.0
      Low Light            1.5              85                        1.8

      Table II. Performance metrics for various test scenarios.

    The results confirm that the ultrasonic sensor maintains stable readings across lighting conditions, whereas the camera's recognition accuracy dips under low light. GSM alerts are successfully delivered within 8 to 10 seconds in urban network areas.

    User feedback indicated increased confidence in movement and reduced anxiety while navigating unfamiliar spaces. The lightweight and wearable nature of the device further improved comfort and usability.

  6. FUTURE WORK

    Future versions will focus on:

    1. Integrating GPS tracking for location-based alerts,

    2. Enhancing night vision using IR sensors,

    3. Adding speech-based navigation for richer feedback,

    4. Improving power efficiency through hardware optimization.

    These upgrades aim to evolve the prototype into a comprehensive smart assistive system capable of continuous adaptation to user environments.

  7. ACKNOWLEDGMENT

The authors express their sincere gratitude to the Department of Electronics and Telecommunication Engineering, Pillai College of Engineering, Panvel, for their continuous support and encouragement throughout the development of this project. Special thanks to the faculty mentors for their valuable guidance, technical insights, and motivation that greatly contributed to the successful completion of this work.

  8. CONCLUSION

The proposed Visual Impairment Support System represents a practical and affordable assistive solution. By merging AI, IoT, and embedded computing, it enhances situational awareness and personal safety for visually impaired users. The system's modular design, cost efficiency, and real-time response make it a promising candidate for large-scale deployment in developing regions. More importantly, this project highlights the potential of empathetic engineering: technology designed with human needs at its core. By helping individuals regain independence, such innovations contribute to social inclusion and quality of life.

REFERENCES

[1] World Health Organization, "Global Data on Visual Impairment," 2023.

[2] S. Meenakshi and R. Rajesh, "Obstacle Detection for Visually Impaired using Ultrasonic Sensors," IEEE Trans. Assistive Tech., vol. 9, no. 3, 2021.

[3] OpenCV Developers, "Object Detection with Haar Cascades," 2024.

[4] Raspberry Pi Foundation, "Raspberry Pi 4 Technical Guide," 2024.

[5] T. Nguyen et al., "IoT-Based Assistive Systems for Accessibility," Sensors Journal, vol. 22, no. 11, 2022.

[6] K. Singh and P. Yadav, "AI-Based Vision System for the Blind," Int. J. Advanced Research in Engineering Science, vol. 7, no. 4, 2022.