DOI : https://doi.org/10.5281/zenodo.20205929
- Open Access

- Authors : Dr. Manjunatha Siddappa, Madhan Gowda R S, Mohammed Hasham, Naveen Kumar N
- Paper ID : IJERTV15IS050928
- Volume & Issue : Volume 15, Issue 05, May 2026
- Published (First Online): 15-05-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License : This work is licensed under a Creative Commons Attribution 4.0 International License
AI-Based Money Recognition Assistant For Blind People
Dr. Manjunatha Siddappa
Associate Professor, Dept. of ECE, S J C Institute Of Technology, Chikkaballapur
Mohammed Hasham
Student, Dept. of ECE, S J C Institute Of Technology, Chikkaballapur
Madhan Gowda RS
Student, Dept. of ECE, S J C Institute Of Technology, Chikkaballapur
Naveen Kumar N
Student, Dept. of ECE, S J C Institute Of Technology, Chikkaballapur
Abstract – This paper presents an AI-Based Money Recognition Assistant for Blind People, a low-cost, wearable assistive system designed to enhance the autonomy and safety of visually impaired individuals. The proposed device integrates an ESP32-CAM microcontroller within a spectacle-based form factor, enabling real-time currency denomination recognition and obstacle avoidance. The system utilizes a lightweight Convolutional Neural Network (CNN) deployed via TensorFlow Lite for on-device currency classification, while an ultrasonic sensor continuously monitors the user's surroundings to detect nearby obstacles. User feedback is provided through a speaker that announces the recognized denomination and a buzzer that issues proximity alerts. By performing all inference locally using Edge AI, the system ensures data privacy, low latency, and complete independence from cloud connectivity. The optimized CNN model, reduced through INT8 post-training quantization, achieves an inference time of approximately 1.2 seconds within the ESP32's limited computational resources. Experimental validation confirms a high recognition accuracy of 93.8% and reliable obstacle detection up to 200 cm, demonstrating the system's effectiveness as a holistic assistive platform for daily financial and navigational tasks.
Index Terms: Edge AI, Assistive Technology, ESP32-CAM, Convolutional Neural Network (CNN), TensorFlow Lite, Currency Recognition, Obstacle Detection, Visually Impaired, Wearable Device, Quantization
I. INTRODUCTION
For millions of visually impaired individuals, navigating the world involves overcoming daily challenges that sighted individuals often take for granted. Among the most significant are the reliable identification of currency and the risks associated with physical navigation. Identifying paper currency is a frequent source of difficulty, as the similar texture and size of different banknotes, coupled with worn-out tactile markings, can lead to errors and potential exploitation during cash transactions. At the same time, independent mobility remains a major safety concern. While traditional white canes effectively detect ground-level obstacles, they fail to provide protection against head-level hazards such as low-hanging branches, poles, or signboards. Consequently, users experience a high cognitive load and persistent stress when navigating unfamiliar environments.
To overcome these limitations, this paper presents the AI-Based Money Recognition Assistant for Blind People, a low-cost, spectacle-mounted assistive system that integrates currency recognition and obstacle detection into a single wearable device. The system is built around an ESP32-CAM microcontroller, which performs real-time image acquisition and on-device currency classification using a lightweight Convolutional Neural Network (CNN) optimized through INT8 post-training quantization. Simultaneously, an ultrasonic sensor enables continuous obstacle detection, enhancing user safety through proximity-based alerts.
RELATED WORK
Dr. Sharath Kumar Y. H. et al. [4] proposed Blind People Currency and Object Detection (2025), a real-time Android-based assistive application designed to enhance the independence of visually impaired individuals. The system employs the YOLOv8 deep learning algorithm for both object and currency recognition, integrated with Optical Character Recognition (OCR) for text extraction and Text-to-Speech (TTS) for audio feedback. A gesture-based swipe interface enables users to navigate between features seamlessly without visual input. Experimental results demonstrate 94.5% object detection accuracy, 95.2% currency recognition accuracy, and 91.3% OCR accuracy under diverse lighting conditions. The model achieves real-time performance with an average inference time of 180 ms per frame on standard Android hardware. Operating entirely offline, the system ensures data privacy, low latency, and accessibility, providing a scalable and practical assistive solution for daily use by the visually impaired.
M. S. Salimath [5] proposed Object Recognition and Currency Detection for Visually Impaired People, presenting an integrated approach that combines object recognition and currency detection to assist visually impaired users. The system employs the YOLO (You Only Look Once) deep learning model, optimized for real-time detection within live camera feeds. By leveraging computer vision and deep learning, the model effectively identifies both objects and currency notes, providing instant auditory feedback for user interaction. This dual-function capability enhances user independence and situational awareness across various environments. Experimental evaluations confirm the system's high accuracy and responsiveness, demonstrating its efficacy in real-world scenarios as an assistive solution for visually impaired individuals.
THE PROPOSED METHODOLOGY
The proposed system follows an experimental research methodology integrating hardware and software development. An ESP32-CAM microcontroller is used for image capture and processing, while an ultrasonic sensor assists in obstacle detection. A Convolutional Neural Network (CNN) model was trained on a custom dataset of Indian currency images under varied lighting conditions. The model was optimized using INT8 post-training quantization and deployed via TensorFlow Lite for efficient on-device inference. Real-time testing was conducted in both indoor and outdoor environments to evaluate accuracy, latency, and reliability. Data preprocessing and model training were performed using Python and OpenCV. Audio feedback through a speaker and buzzer ensures clear communication with the user. The methodology ensures a replicable, low-cost, and privacy-preserving solution for visually impaired users.
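The INT8 post-training quantization step can be illustrated with a short sketch of the underlying affine mapping. The helper names below are our own, and TensorFlow Lite derives the actual per-tensor scale and zero-point from calibration data; this is only a minimal illustration of the arithmetic involved.

```python
# Illustrative affine INT8 quantization, assuming a symmetric-ish float
# range observed during calibration. Not the TFLite implementation itself.

def qparams(x_min, x_max):
    """Derive scale and zero-point so [x_min, x_max] maps onto [-128, 127]."""
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)  # range must include 0
    scale = (x_max - x_min) / 255.0
    zero_point = round(-128 - x_min / scale)
    return scale, zero_point

def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the INT8 range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zp = qparams(-1.0, 1.0)  # e.g. activations observed in [-1, 1]
q = quantize(0.37, scale, zp)
print(q, dequantize(q, scale, zp))  # reconstruction error is below scale/2
```

Storing weights and activations as 8-bit integers in this way shrinks the model roughly fourfold, which is what makes CNN inference feasible within the ESP32's memory budget.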
Fig. 1. Block diagram of AI-based money recognition assistant for blind people.
The proposed system is developed using a dual-controller embedded architecture centered on the ESP32 microcontroller. The ESP32 acts as the central control unit, managing all communication between sensors, processing units, and output modules. The power supply provides stable voltage regulation to both the ESP32 and the ESP32-CAM module, ensuring consistent performance during real-time operation.
The speaker module delivers clear audio feedback announcing the detected currency value, while the buzzer provides non-verbal obstacle alerts. All hardware components are mounted on a spectacle frame, creating a compact, wearable assistive system that provides hands-free, real-time interaction. This integrated design ensures low latency, energy efficiency, and complete independence from cloud connectivity.
At the center of the system is the ESP32, which acts as the main controller and decision-making unit. It manages all input and output operations, coordinating data between various connected components. The power supply provides the necessary operating voltage to all components, ensuring stable functioning of the ESP32, sensors, and peripherals.
On the input side, the system receives data from two sources: the ESP32-CAM and two ultrasonic sensors. The ESP32-CAM captures real-time images and processes them to identify currency denominations. It then sends the recognition result (for example, 100 Rupees) to the main ESP32. The two ultrasonic sensors continuously monitor the environment for nearby obstacles, each placed on a different side (left and right) of the spectacles to provide a wider detection range. The sensors measure the distance to objects and send the data to the ESP32 for real-time analysis.
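The recognition result passed from the ESP32-CAM to the main ESP32 can be sketched as a simple line-based serial message. The "CUR:<value>" framing below is our own illustration, not a protocol defined by the paper:

```python
# Hypothetical framing for the recognition result sent from the ESP32-CAM
# to the main ESP32 over a serial link.

def encode_result(denomination: int) -> bytes:
    """Pack a recognized denomination, e.g. 100 rupees, into one line."""
    return f"CUR:{denomination}\n".encode("ascii")

def parse_result(line: bytes):
    """Return the denomination as an int, or None for malformed input."""
    text = line.decode("ascii", errors="replace").strip()
    if text.startswith("CUR:") and text[4:].isdigit():
        return int(text[4:])
    return None

msg = encode_result(100)
print(parse_result(msg))  # 100
```

A newline-terminated ASCII message like this keeps the inter-controller link trivial to parse on the receiving ESP32 and robust to partial reads.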
Based on the processed data, the ESP32 produces appropriate outputs. If either ultrasonic sensor detects an obstacle within a predefined safety distance, the controller immediately triggers the corresponding buzzer (Buzzer 1 or Buzzer 2). This dual-buzzer setup helps indicate the direction of the obstacle, left or right, through distinct audio feedback. Simultaneously, when the ESP32 receives the currency recognition data from the ESP32-CAM, it activates the earphone to play a voice announcement of the detected denomination, providing an audible cue to the user.
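The dual-buzzer decision rule reduces to a per-side threshold comparison. A minimal sketch, assuming a hypothetical 50 cm safety distance (the paper specifies direction-coded alerts but not the exact threshold value):

```python
# Sketch of the controller's direction-coded alert logic.
# SAFETY_CM is an assumed example threshold, not a value from the paper.

SAFETY_CM = 50

def buzzer_outputs(left_cm: float, right_cm: float):
    """Return (buzzer1_on, buzzer2_on) for the left and right sensors."""
    return (left_cm < SAFETY_CM, right_cm < SAFETY_CM)

print(buzzer_outputs(30, 120))  # obstacle on the left only -> (True, False)
```

Keeping the two channels independent means an obstacle spanning both sides activates both buzzers, which is the desired behavior for a head-on hazard.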
Overall, the flow shows a seamless interaction between sensing, processing, and alert mechanisms, where the ESP32 serves as the core hub managing both navigation (ultrasonic sensors and buzzers) and currency recognition (camera and earphone) in real time to assist visually impaired users.
COMPREHENSIVE ANALYSIS AND RESULTS
The proposed AI-enabled wearable assistive device was designed and implemented to enhance the independence and safety of visually impaired individuals. The system combines Edge AI-based currency recognition and ultrasonic obstacle detection within a compact, spectacle-mounted framework. Comprehensive testing was conducted to evaluate its accuracy, efficiency, and usability under real-world conditions.
Functional Analysis:
The system consists of two integrated modules, currency recognition and obstacle detection, controlled by the ESP32-CAM microcontroller.
The obstacle detection module employs HC-SR04 ultrasonic sensors mounted on the front of the spectacle frame to continuously monitor the environment. When an obstacle is detected within 20 cm to 200 cm, the buzzer provides an instant audio alert with variable frequency based on distance. The currency recognition module operates in an on-demand mode, activated through a push-button. The ESP32-CAM captures an image of the currency note and processes it locally using a Convolutional Neural Network (CNN) model deployed through Edge Impulse. The identified denomination is then conveyed to the user through voice feedback via earphones. This dual-audio feedback ensures clear distinction between obstacle and currency information.
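The distance computation behind the HC-SR04 alerts follows the sensor's standard echo-time relation, and the "variable frequency" alert can be realized as a distance-to-beep-interval mapping. The linear mapping below is a hypothetical example of such a curve, not the exact one used by the authors:

```python
# HC-SR04 distance from echo pulse width, plus an example beep mapping.
# The echo formula is the standard sensor relation; the beep-interval
# curve is an assumed illustration of the variable-frequency alert.

SPEED_OF_SOUND_CM_PER_US = 0.0343  # at roughly 20 degrees C

def echo_to_cm(echo_us: float) -> float:
    """The echo pulse covers the round trip, so halve the travel time."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def beep_interval_ms(distance_cm: float):
    """Faster beeps as the obstacle nears; silent outside 20-200 cm."""
    if not 20 <= distance_cm <= 200:
        return None
    # Linear map: 200 cm -> 1000 ms between beeps, 20 cm -> 100 ms.
    return 100 + (distance_cm - 20) * (1000 - 100) / (200 - 20)

d = echo_to_cm(5830)  # an echo of ~5830 microseconds is about 100 cm
print(round(d), beep_interval_ms(d))
```

Mapping distance to the interval between beeps, rather than to tone frequency, is one simple way to make proximity intuitively audible without speech.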
Performance Evaluation
Experimental testing was carried out under different lighting conditions using Indian currency notes of various denominations and wear conditions (new, old, folded, and partially damaged).
Currency Recognition:
The trained CNN model achieved an average recognition accuracy of 93.8%, maintaining reliable performance even under suboptimal lighting. The inference time per image was approximately 1.2 seconds, enabling near real-time response.
Obstacle Detection:
The ultrasonic sensors accurately detected objects within the 20-200 cm range with an average error margin of ±3 cm. The buzzer response time was less than 100 milliseconds, ensuring timely alerts for safe navigation.
Power Efficiency:
The system was powered by a 3.7 V, 2200 mAh lithium-ion battery, providing 3-4 hours of continuous operation. Power consumption remained stable during dual-module operation.
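The reported runtime is consistent with a simple capacity-over-draw estimate. The 630 mA figure below is an assumed average draw for illustration, not a measured value from the paper:

```python
# Back-of-the-envelope runtime check: a 2200 mAh pack lasting 3-4 hours
# implies an average draw of roughly 550-730 mA, plausible for an
# ESP32-CAM plus peripherals. 630 mA is an assumed example draw.

CAPACITY_MAH = 2200

def runtime_hours(avg_draw_ma: float) -> float:
    return CAPACITY_MAH / avg_draw_ma

print(round(runtime_hours(630), 1))  # ~3.5 h at the assumed 630 mA
```

Real batteries deliver somewhat less than nameplate capacity, so the observed 3-4 hours suggests the average draw sits toward the lower end of that band.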
Usability and Reliability:
User trials demonstrated effective audio feedback through both earphones and buzzer, with clear differentiation between alert types. The hands-free spectacle design ensured comfort and ease of use during movement. The device functioned independently of internet connectivity, utilizing Edge AI inference for instant processing, enhanced privacy, and consistent reliability across environments.
CONCLUSION
This smart wearable spectacle device for visually impaired users was successfully designed and implemented using an ESP32-CAM module, ultrasonic sensors, earphones, and a buzzer powered by a rechargeable lithium-ion battery. The system integrates AI-based currency recognition and ultrasonic obstacle detection into a single, compact, and hands-free unit. The ultrasonic obstacle detection module effectively detected obstacles within a range of 20 cm to 200 cm, activating the buzzer with varying beep intensity as the object approached. Continuous detection operated reliably in both indoor and outdoor environments, ensuring user safety.
The currency recognition module, developed using Edge Impulse and deployed on the ESP32-CAM, achieved an average accuracy of 93-95% across different denominations of Indian currency. The trained CNN model successfully recognized notes even when folded, crumpled, or partially damaged, and under varying lighting conditions. The detected currency value was announced through the earphones, ensuring clear and private audio feedback.
The entire system operates independently of internet connectivity, performing all AI inference locally on the device to ensure low latency and user data privacy. The rechargeable battery provided an average operational time of 3-4 hours on a single charge. The lightweight spectacle-based design supports comfortable, hands-free use, enabling safe navigation and independent financial transactions.
Overall, the experimental results demonstrate that the proposed system provides an affordable, reliable, and accessible assistive technology solution that promotes independent mobility and financial autonomy for visually impaired individuals.
Comparative Analysis
The following table compares the proposed system with existing assistive technologies for the visually impaired, highlighting its advantages in integration, processing mode, and usability.
| Reference / System | Core Features | Processing Mode | Accuracy |
| --- | --- | --- | --- |
| Sangeetha et al. [6] | Currency Detection App | Smartphone (App) | 99% |
| Rathore et al. [10] | Currency Recognition | Cloud / Online | 92.5% |
| Proposed System | Currency + Obstacle Detection | Edge AI (Offline, Wearable) | 93.8% |
REFERENCES
[1] L. Latha, et al., Fake currency detection using image processing, Proc. Int. Conf. on Advancements in Electrical, Electronics, Communication, Computing and Automation (ICAECA), 2021.
[2] K. Gautam, Indian currency detection using image recognition technique, GNA University, Punjab, India.
[3] R. Pokala and V. Teja, Indian currency recognition for blind people, Int. Res. J. of Engineering and Technology (IRJET), 2020.
[4] M. Laavanya, et al., Real-time fake currency note detection using deep learning, Int. J. of Engineering and Advanced Technology (IJEAT), vol. 9, 2019.
[5] V. B. Vaishak and H. S. Hoysala, Currency and fake currency detection using machine learning and image processing, 2021.
[6] S. V. T. Sangeetha and T. Porselvi, Currency detection app for blind people using MIT App Inventor, 2021.
[7] R. Vishwasi, S. K. Holla, Y. D., and K. Kulkarni, A Raspberry Pi based CNN model for Indian currency detection for visually impaired people, IEEE, 2022.
[8] S. P., S. N., P. D., and U. M. R. N., BLIND ASSIST: A one-stop mobile application for the visually impaired, IEEE Pune Section Int. Conf. (PuneCon), MIT ADT University, Pune, India, 2021.
[9] N. Patange, V. Dhutre, V. Wani, and S. Oak, Comprehensive analysis of Indian currency recognition system and location tracking for visually impaired, Int. Conf. on Intelligent Technologies (CONIT), Karnataka, India, 2021.
[10] N. Rathore, M. Bhargavi, A. Bala, and K. Kashyap, Indian currency recognition for visually impaired individuals, IEEE, 2023.
[11] R. C. Joshi, S. Yadav, and M. K. Dutta, YOLO-v3 based currency detection and recognition system for visually impaired persons, Int. Conf. on Contemporary Computing and Applications (IC3A), Dr. A. P. J. Abdul Kalam Technical University, Lucknow, 2020.
[12] G. A. R. Sanchez, A computer-vision-based banknote recognition system for the blind with an accuracy of 98% on smartphone videos, J. Korea Soc. Comput. Inf., vol. 24, pp. 67-72, Jun. 2019.
[13] G. A. R. Sanchez, Y. J. Uh, K. Lim, and H. Byun, Fast banknote recognition for the blind on real-life mobile videos, Proc. Korean Comput. Conf., Jeju Island, South Korea, Jun. 2015, pp. 835-837.
[14] F. M. Hasanuzzaman, X. Yang, and Y. Tian, Robust and effective component-based banknote recognition by SURF features, Proc. 20th Annu. Wireless Opt. Commun. Conf. (WOCC), Newark, NJ, USA, Apr. 2011, pp. 1-6.
[15] Y. Li, C. Yang, L. Zhang, R. Xia, L. Fan, and W. Xie, A novel SURF based on a unified model of appearance and motion-variation, IEEE Access, vol. 6, pp. 31065-31076, Jun. 2018.
[16] T. D. Pham, C. Park, D. T. Nguyen, G. Batchuluun, and K. R. Park, Deep learning-based fake-banknote detection for visually impaired people using visible-light images captured by smartphone cameras, IEEE Access, vol. 8, pp. 63144-63161, Apr. 2020.
[17] S. Mittal and S. Mittal, Indian banknote recognition using convolutional neural network, Proc. 3rd Int. Conf. Internet Things, Smart Innov. Usages (IoT-SIU), Bhimtal, India, Feb. 2018, pp. 1-6.
[18] D. G. Pérez and E. B. Corrochano, Recognition system for Euro and Mexican banknotes based on deep learning with real scene images, Computación y Sistemas, vol. 22, no. 4, pp. 1065-1076, Dec. 2018.
[19] DM Lab, Dongguk Korean Banknote Database Version 1 (DKB V1) and CNN models for banknote detection, 2020. [Online]. Available: http://dm.dgu.edu/link.html
[20] P. Viola and M. Jones, Rapid object detection using a boosted cascade of simple features, Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), Kauai, HI, USA, Dec. 2001, pp. I-511 to I-518.
