- DOI: https://doi.org/10.5281/zenodo.18533223
- Open Access
- Authors: Dr. Uma .S, Kavin .S
- Paper ID: IJERTV15IS010742
- Volume & Issue: Volume 15, Issue 01, January 2026
- Published (First Online): 09-02-2026
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
An Image-Based Deep Learning Framework for Hand Gesture Sign Language Recognition
Dr. Uma .S
Professor, Department of Computer Science and Engineering, Hindusthan College of Engineering and Technology, Coimbatore
Kavin .S
Master of Computer Science and Engineering, Hindusthan College of Engineering and Technology, Coimbatore
Abstract – Hand gesture-based sign language recognition using computer vision has significant potential to enhance communication for individuals with hearing and speech impairments. Sign language is a visual mode of communication that relies on hand gestures, facial expressions, and body movements to convey meaning. Conventional communication methods often require human interpreters or pre-recorded visual aids, which may be time-consuming, costly, and not always available.
This paper presents an image-based deep learning framework for hand gesture sign language recognition, designed to translate sign gestures into corresponding text and speech output. The proposed framework captures hand gesture images through a camera, applies image preprocessing techniques such as grayscale conversion, image enhancement, histogram equalization, and edge detection, and extracts meaningful features from the processed images. These features are then used to train a deep learning-based classification model capable of accurately recognizing sign language gestures.
The recognized gestures are converted into readable text and synthesized speech using text-to-speech technology, enabling effective interaction between hearing-impaired users and the general public. The experimental results demonstrate that the proposed framework achieves reliable recognition accuracy and provides an efficient assistive communication solution for real-world applications such as train ticket booking systems.
Keywords: Hand Gesture Recognition, Sign Language Recognition, Deep Learning, Image Processing, Assistive Technology
-
INTRODUCTION
Effective communication is a fundamental requirement in daily human interaction. However, individuals with hearing and speech impairments often face challenges while communicating with the general population due to the limited understanding of sign language. Although sign language serves as a primary mode of communication for the deaf and hard-of-hearing community, the lack of automated interpretation systems restricts its accessibility in public and service-oriented environments.
Sign language communication is visually expressed through structured hand gestures, facial expressions, and body movements. Interpreting these gestures manually requires trained professionals, which is not always feasible in real-time scenarios such as transportation services, public offices, or customer support environments. To address this limitation, computer-based sign language recognition systems have gained significant research attention in recent years.
Advancements in image processing and deep learning techniques have enabled the development of automated systems capable of analyzing visual gesture patterns with higher accuracy. By
capturing hand gestures through cameras and applying preprocessing techniques such as noise removal and image enhancement, discriminative features can be extracted for effective classification. Deep learning models, particularly convolutional neural networks, have shown promising performance in learning complex visual representations required for gesture recognition.
This project focuses on designing an image processing-based sign language recognition system that converts hand gestures into readable text output. The proposed system aims to support assistive communication applications such as public service access and automated ticket booking for individuals with hearing disabilities. By minimizing human intervention and improving recognition reliability, the system contributes toward inclusive and accessible communication technologies.
-
SIGN LANGUAGE
Sign language is a structured visual communication system primarily used by individuals with hearing and speech impairments. Unlike spoken languages, sign language conveys meaning through coordinated hand gestures, facial expressions, and body movements. These visual components collectively represent words, phrases, and grammatical constructs.
Different regions and communities have developed their own sign languages, each with unique vocabulary and grammatical rules. Examples include American Sign Language (ASL), British Sign Language (BSL), Indian Sign Language (ISL), and several others. These sign languages are not direct visual translations of spoken languages; instead, they function as independent linguistic systems with distinct syntax and semantics.
With the increasing demand for inclusive technologies, automated recognition of sign language has become an important research area. Visual gesture recognition systems aim to bridge the communication gap between sign language users and non-signers by converting gestures into readable or audible outputs. Figure 1.1 illustrates sample hand gestures used for representing alphabets and numerical symbols.
FIGURE 1.1 Hand Gestures for Numbers
-
DIFFERENT TYPES OF SIGN LANGUAGE
Sign languages vary across geographical regions and cultural communities, and there is no single universal sign language used worldwide. Linguistic studies have identified more than a hundred distinct sign languages, each developed to meet the communication needs of specific deaf communities.
Some commonly recognized sign languages include:
- American Sign Language (ASL)
- British Sign Language (BSL)
- Indian and Indo-Pakistani Sign Language
- Japanese Sign Language
- Australian Sign Language (Auslan)
- New Zealand Sign Language
Although these sign languages differ in structure and vocabulary, they may influence one another through shared gestures or borrowed signs, similar to loanwords in spoken languages. Understanding these variations is essential when designing recognition systems, as gesture patterns and meanings can differ significantly across regions.
-
LANGUAGE BARRIER
A language barrier arises when individuals who speak different languages are unable to communicate effectively due to differences in linguistic structure, pronunciation, or vocabulary. Such barriers are commonly observed in multicultural environments, international collaborations, and service-oriented interactions. When communication is limited, misunderstandings and incorrect interpretations may occur, leading to reduced efficiency and social exclusion.
Historically, visual forms of communication have been used to overcome linguistic differences. In several regions across the world, including North America, Australia, and parts of Africa, gesture-based communication systems enabled interaction between communities with mutually unintelligible spoken languages. These systems relied on commonly understood visual symbols and gestures derived from shared experiences and environments.
In modern society, language barriers continue to affect access to education, employment, and public services. Overcoming these barriers often requires the use of interpreters, translation systems, or assistive technologies. Automated visual communication systems, such as sign language recognition, offer a promising solution by enabling effective interaction without requiring shared spoken language proficiency. Such technologies play a vital role in promoting inclusivity and cross-cultural communication.
-
CHALLENGES FACED BY DEAF AND MUTE INDIVIDUALS
Individuals with hearing and speech impairments encounter significant challenges in daily communication and social interaction. Since most public communication systems rely on spoken language, deaf and mute individuals often depend on sign language, lip-reading, or written communication, which may not always be understood or supported in all environments.
Beyond communication difficulties, social misconceptions and lack of awareness can further limit opportunities in education and employment. In some cases, individuals with hearing disabilities are incorrectly perceived as incapable, leading to discrimination and reduced access to resources. Limited availability of assistive
technologies and trained interpreters further intensifies these challenges.
Despite these barriers, many deaf and mute individuals have demonstrated exceptional achievements across various professional fields. Continued technological innovation, combined with increased awareness and advocacy, is essential to ensure equal access to information, services, and opportunities. Automated sign language recognition systems can significantly reduce communication gaps and support independent interaction in public and digital platforms.
-
HAND GESTURES
Hand gestures form an essential component of nonverbal communication and play a significant role in conveying meaning alongside or in place of spoken language. Gestures involve visible movements of the hands, face, or body that communicate information, emotions, or intentions. Their interpretation can vary across cultures, making contextual understanding an important factor in effective communication.
In sign language, hand gestures are not merely supportive cues but structured linguistic elements governed by specific rules and patterns. Unlike casual gestures used during public speaking, sign language gestures follow defined grammatical conventions and convey precise meanings. Research in cognitive psychology has shown that gestures enhance memory recall and comprehension, emphasizing their importance in human communication.
Studies have also demonstrated that gesture usage is a natural human behaviour, observed even among individuals who are visually impaired. This highlights the deep cognitive connection between gestures and language processing. In automated recognition systems, capturing and analyzing hand gesture movements accurately is critical for effective interpretation of sign language.
-
IMAGE PROCESSING
Image processing refers to the application of computational techniques to analyze, enhance, and extract meaningful information from digital images. It plays a crucial role in computer vision- based applications by enabling machines to interpret visual data in a structured manner. Common image processing operations include image enhancement, noise reduction, segmentation, and feature extraction.
In gesture recognition systems, image processing techniques are used to preprocess captured images and improve the visibility of important features such as hand contours and motion patterns. Operations such as filtering and contrast enhancement help reduce background noise, while segmentation techniques isolate the region of interest for further analysis. Feature extraction methods then convert visual patterns into numerical representations suitable for classification.
Image processing techniques can be broadly categorized into analog and digital approaches, with digital image processing being more widely adopted due to its flexibility and compatibility with modern computing systems. The effectiveness of gesture recognition systems heavily depends on the accuracy of preprocessing and feature extraction stages.
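As a purely illustrative sketch (not the authors' code), the operations described above can be expressed with MATLAB's Image Processing Toolbox roughly as follows; the file name, filter strength, and selected shape features are placeholder assumptions for demonstration.

```matlab
% Illustrative sketch of common preprocessing and segmentation operations.
% File name, filter sigma, and selected features are placeholder assumptions.
I = imread('gesture_sample.jpg');        % acquire a gesture image
if size(I, 3) == 3
    I = rgb2gray(I);                     % work on a single intensity channel
end
I = imgaussfilt(I, 2);                   % noise reduction (Gaussian smoothing)
I = imadjust(I);                         % contrast enhancement
BW = imbinarize(I);                      % segment foreground from background
BW = bwareafilt(BW, 1);                  % keep the largest region (assumed hand)
feats = regionprops(BW, 'Area', 'Eccentricity', 'Solidity');  % simple shape features
disp(feats);
```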
-
APPLICATIONS OF IMAGE PROCESSING
Image processing has extensive applications across various domains due to its ability to analyze and interpret visual data efficiently. In medical imaging, techniques such as segmentation and classification assist in disease diagnosis and treatment planning using X-ray, MRI, and CT images. In surveillance systems, image
processing enables object detection, motion tracking, and facial recognition for enhanced security monitoring.
Robotics relies on image processing for environment perception, object recognition, and navigation. Similarly, in remote sensing, satellite imagery is analyzed to monitor environmental changes, land usage, and disaster prediction. The entertainment industry utilizes image processing for visual effects, animation, and digital content enhancement.
In the context of this project, image processing serves as the foundation for recognizing hand gestures in sign language. By accurately extracting visual features from gesture images, the system enables effective classification and translation of signs into textual output.
CHAPTER 2 LITERATURE SURVEY
Sign language recognition has gained significant attention due to
its potential to reduce communication barriers faced by individuals with hearing and speech impairments. Several researchers have explored image processing and deep learning techniques to recognize static and dynamic hand gestures. This chapter reviews key contributions in the field and identifies the limitations that motivate the proposed work.
-
REVIEW OF EXISTING METHODS
Sruthi and Lijiya (2019) presented a deep learning-based framework for recognizing static Indian Sign Language (ISL) alphabets using convolutional neural networks. Their system employed binary hand silhouettes as input and achieved high recognition accuracy for signer-independent classification. While the results demonstrated the effectiveness of CNNs for static gesture recognition, the study focused only on alphabet-level recognition and did not address real-world service-based applications.
Das et al. (2018) proposed a CNN-based approach for recognizing static sign language gestures captured using RGB cameras. Their work utilized the Inception v3 architecture and achieved validation accuracy above 90%. Although the model showed promising performance, the system was limited to controlled datasets and did not explore application-level integration or real-time deployment scenarios.
Likhar et al. (2020) explored both static and dynamic ISL recognition using RGB-D data captured from a Microsoft Kinect sensor. Their approach combined CNNs and convolutional LSTMs to achieve high classification accuracy for both alphabet-level and word-level gestures. However, reliance on specialized hardware such as Kinect limits the practicality and portability of the system for everyday public usage.
Dutta and Bellary (2017) investigated machine learning-based classification techniques for recognizing single-handed and double- handed ISL gestures using MATLAB. Their work demonstrated acceptable accuracy levels; however, traditional machine learning approaches often require handcrafted features and may not generalize well under varying lighting and background conditions.
Bhagat et al. (2019) proposed a real-time gesture recognition system using RGB-D data and convolutional neural networks. Their system achieved high accuracy for both static and dynamic gestures and demonstrated adaptability to American Sign Language through transfer learning. Despite strong performance, the dependency on depth sensors increases system complexity and reduces accessibility for common users.
Suardi et al. (2021) introduced an ensemble CNN-based model for sign language recognition using hand key-point detection. By combining multiple CNN architectures, the system achieved improved accuracy compared to individual models. Although ensemble methods enhance performance, they also increase computational cost, which may affect real-time usability on low- resource systems.
Starner and Pentland (1995) presented one of the earliest works on continuous sign language recognition using Hidden Markov Models for American Sign Language. Their real-time system achieved high word recognition accuracy without explicit finger modeling. While influential, HMM-based approaches are less effective compared to modern deep learning techniques for complex gesture variations.
Vijayalakshmi and Aarthi (2016) developed a sensor-based gesture recognition system using flex sensors and HMM-based text-to- speech conversion. Although effective for limited gestures, hardware-based solutions reduce user convenience and scalability when compared to vision-based systems.
Thylashri et al. (2022) applied convolutional neural networks for recognizing symbolic hand gestures of deaf and mute individuals. Their work demonstrated the applicability of deep learning for gesture recognition; however, the system primarily focused on gesture classification without addressing assistive service applications.
Gupta et al. (2022) proposed a CNN-based system for converting hand gestures into text, incorporating emotion recognition to capture non-manual features. The model achieved high accuracy on American Sign Language datasets, but its applicability to Indian Sign Language and public service integration remains limited.
Peguda et al. (2022) explored speech-to-sign language translation for multiple Indian languages using deep learning techniques such as LSTM and MFCC-based speech recognition. While the system addressed reverse translation (speech to sign), it did not focus on sign-to-text conversion required for deaf users in public interactions.
Thanasekhar et al. (2019) developed a real-time ISL recognition system targeted at programming education, recognizing both static and dynamic gestures using CNNs. Although the application demonstrated low latency, it was restricted to a specific domain and limited vocabulary set.
Chilukala and Vadalia (2022) conducted a comprehensive survey on sign language translation methods across multiple languages, highlighting the challenges in dataset availability and real-time performance. Their work emphasized the need for application- oriented systems tailored to specific user needs.
Mistry et al. (2021) proposed a CNN-LSTM-based approach for ISL word recognition, achieving moderate accuracy on sequential image datasets. The results indicated the complexity of word-level recognition and the necessity for improved preprocessing and feature extraction techniques.
-
SUMMARY AND RESEARCH GAP
From the literature review, it is evident that deep learning techniques, particularly convolutional neural networks, have significantly improved the accuracy of sign language recognition systems. Most existing works focus on static alphabet recognition, isolated word recognition, or experimental datasets under controlled environments. Several systems rely on specialized hardware such as depth sensors or wearable devices, limiting their practical deployment.
Moreover, limited attention has been given to integrating sign language recognition into real-world service applications that directly benefit end users. There exists a clear research gap in developing lightweight, vision-based systems that operate using standard cameras and support assistive services.
The novelty of the proposed work lies in designing an image processing and deep learning-based sign language recognition system aimed at facilitating real-world applications, such as train ticket booking assistance for deaf and mute individuals. By converting sign gestures into text and speech output using MATLAB-based image processing techniques, the system seeks to reduce communication barriers and promote inclusive access to public services.
CHAPTER 3 PROPOSED METHODOLOGY
-
MOTIVATION
Communication plays a vital role in accessing public services; however, deaf and mute individuals often face significant challenges due to the lack of awareness of sign language among the general population. These challenges become more severe in public environments such as railway stations, where verbal interaction is the primary mode of communication.
The motivation behind this project is to design a technology- assisted communication system that enables deaf and mute individuals to express their requirements independently without relying on interpreters or third parties. By converting sign language gestures into understandable text and speech, the proposed system aims to reduce communication barriers and promote social inclusion and self-reliance.
-
PROBLEM IDENTIFICATION
Deaf and mute individuals frequently encounter difficulties while performing routine activities that require verbal communication. One such critical activity is obtaining train tickets at railway stations. Ticketing officers generally rely on spoken language, and most are unfamiliar with sign language. As a result, effective communication between the ticketing staff and deaf or mute passengers becomes difficult.
This communication gap often prevents deaf and mute individuals from clearly conveying essential information such as destination, ticket type, or travel preferences. Consequently, they are forced to depend on others, which limits their independence.
The core problem addressed in this work is the absence of an efficient and user-friendly communication medium that can bridge the interaction gap between deaf and mute individuals and the general public in real-time service environments. The proposed system seeks to address this limitation by translating hand gestures into text and voice outputs that can be easily understood by ticketing personnel.
-
PROPOSED SYSTEM
The proposed system employs image processing and deep learning techniques implemented using the MATLAB platform to recognize Indian Sign Language (ISL) hand gestures. The recognized gestures are converted into corresponding text and voice outputs, enabling effective communication.
Initially, a dataset of Indian Sign Language images is created, focusing on commonly used words and district names relevant to train ticket booking. These gesture images are used to train the system so that each sign corresponds to a predefined textual meaning.
The system operates in three main phases: training, recognition, and output generation. During the training phase, the collected gesture images are preprocessed and trained using suitable feature extraction and classification techniques. In the recognition phase, a new input gesture image is processed and compared with the trained dataset. Finally, in the output phase, the recognized gesture is converted into readable text and synthesized speech.
After preprocessing and feature extraction, the processed hand gesture images are used to train a deep learning-based classification model. The model learns discriminative spatial features of hand gestures and establishes a robust mapping between visual patterns and corresponding sign language meanings. This integration of deep learning with image preprocessing enhances recognition accuracy and enables the framework to generalize effectively across different users and environmental conditions.
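The paper does not report the exact network configuration, so the following MATLAB (Deep Learning Toolbox) sketch only illustrates one plausible way such a classifier could be trained from a folder-per-class gesture dataset; the folder name isl_gestures, the layer sizes, and the training options are assumptions rather than the authors' settings.

```matlab
% Illustrative training sketch; dataset path, architecture and options are assumptions.
imds = imageDatastore('isl_gestures', ...            % one sub-folder per gesture class
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[imdsTrain, imdsTest] = splitEachLabel(imds, 0.8, 'randomized');

inputSize = [300 300 1];                              % matches the 300 x 300 grayscale input
augTrain = augmentedImageDatastore(inputSize, imdsTrain, 'ColorPreprocessing', 'rgb2gray');
augTest  = augmentedImageDatastore(inputSize, imdsTest,  'ColorPreprocessing', 'rgb2gray');

layers = [
    imageInputLayer(inputSize)
    convolution2dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(numel(categories(imds.Labels)))   % one output per gesture class
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam', ...
    'MaxEpochs', 15, 'MiniBatchSize', 32, ...
    'ValidationData', augTest, 'Verbose', false);

net = trainNetwork(augTrain, layers, options);        % train the gesture classifier
```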
The overall objective of this system is to provide a simple, vision- based solution that works using standard cameras and does not require specialized hardware or human interpreters.
-
BLOCK DIAGRAM DESCRIPTION
Figure 3.4 illustrates the block diagram of the proposed methodology. The system is divided into three major stages: Training Phase, Recognition Phase, and Output Phase.
Input Image Acquisition
The input to the system is an image of a hand gesture representing a sign language symbol. For experimental purposes, gesture images are collected from existing datasets and online sources. In real-time implementation, the image will be captured using a camera installed at the ticket counter.
Image Resizing
The captured image is resized to a fixed dimension of 300 × 300 pixels. Standardizing the image size ensures consistency during processing and improves recognition accuracy.
Grayscale Conversion
The resized RGB image is converted into a grayscale image to reduce computational complexity. Grayscale representation simplifies further processing while preserving essential structural information required for gesture recognition.
Image Enhancement
Image enhancement techniques are applied to improve image clarity and highlight important features of the hand gesture. This step helps reduce the impact of lighting variations and background noise, thereby improving feature extraction reliability.
Histogram Equalization
Histogram equalization is used to improve contrast by redistributing pixel intensity values. This process enhances the visibility of gesture contours and improves the system's ability to distinguish between different hand shapes.
Edge Detection
Edge detection techniques are applied to identify the boundaries of the hand gesture. Extracting edge information helps in capturing the shape and structure of the gesture, which plays a crucial role in accurate recognition.
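A minimal MATLAB sketch of the preprocessing chain described above (resizing, grayscale conversion, enhancement, histogram equalization, and edge detection) is given below; the input file name is a placeholder, and the Canny detector is shown only as one possible choice since the paper does not name a specific edge operator.

```matlab
% Illustrative preprocessing chain for one gesture image.
% File name is a placeholder; Canny is one assumed edge-detector choice.
I = imread('sign_input.jpg');       % input image acquisition
I = imresize(I, [300 300]);         % resize to the fixed 300 x 300 dimension
if size(I, 3) == 3
    I = rgb2gray(I);                % grayscale conversion
end
I = imadjust(I);                    % image enhancement (contrast stretching)
I = histeq(I);                      % histogram equalization
E = edge(I, 'canny');               % edge detection of the hand boundary
imshowpair(I, E, 'montage');        % visual check of enhanced image and edges
```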
Feature Extraction and Classification
Relevant features are extracted from the processed image and compared with the trained dataset using classification algorithms. This step determines the gesture class corresponding to the input image.
Text and Speech Output
Once the gesture is successfully recognized, the system generates the corresponding text output. The text is further converted into a voice message, enabling ticketing staff or nearby individuals to understand the conveyed information clearly.
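The paper does not specify which text-to-speech mechanism is used. The sketch below therefore only illustrates the final recognition-to-output step, assuming a trained network net (as in the earlier training sketch) and a preprocessed 300 × 300 grayscale image; the speech portion uses the Windows .NET System.Speech synthesizer as one possible option, not necessarily the authors' choice.

```matlab
% Illustrative recognition-to-output step. Assumes a trained network `net`
% and a preprocessed 300 x 300 grayscale image `I`; the speech call is a
% Windows-only .NET option shown purely as an example.
label = classify(net, I);                        % predicted gesture class
recognizedText = char(label);                    % readable text output
disp(['Recognized sign: ' recognizedText]);

NET.addAssembly('System.Speech');                % load the .NET speech assembly
speaker = System.Speech.Synthesis.SpeechSynthesizer;
Speak(speaker, recognizedText);                  % synthesized voice output
```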
-
ADVANTAGES OF THE PROPOSED SYSTEM
- Reduces dependency on sign language interpreters
- Enables independent communication for deaf and mute individuals
- Uses standard camera-based input without specialized hardware
- Supports real-world service applications such as train ticket booking
- Provides both text and voice output for better accessibility
CHAPTER 4 SOFTWARE ARCHITECTURE
-
MATLAB ENVIRONMENT
MATLAB is a high-level technical computing platform extensively used for algorithm development, data analysis, and system simulation. In this project, MATLAB serves as the primary development environment for implementing image processing and gesture recognition algorithms due to its strong computational capabilities and extensive library support.
MATLAB offers an interactive workspace that allows rapid prototyping and testing of algorithms. Its matrix-based structure is particularly suitable for image representation, where images are treated as numerical matrices. This makes MATLAB an efficient platform for processing, analyzing, and transforming visual data.
In addition, MATLAB supports visualization tools that help in displaying intermediate and final processing results, which is essential for analyzing gesture recognition performance. The availability of built-in toolboxes further reduces development complexity and improves reliability.
-
IMAGE PROCESSING USING MATLAB
MATLAB provides a dedicated Image Processing Toolbox that supports a wide range of operations required for gesture recognition. MATLAB's Deep Learning Toolbox is utilized to train, validate, and evaluate the hand gesture recognition model within the proposed framework. In this proposed system, MATLAB is used to perform the following major image processing tasks:
Image Acquisition and Display
Input gesture images are loaded into the MATLAB environment using appropriate image-reading functions. These images are displayed to verify input quality before further processing.
Image Preprocessing
Preprocessing operations are applied to standardize the input image. This includes resizing, grayscale conversion, and noise reduction to improve consistency across different input samples.
Image Enhancement
Image enhancement techniques are applied to improve contrast and clarity. These steps help highlight essential hand gesture features while minimizing background interference.
Feature Extraction
Important visual characteristics such as edges and shape patterns are extracted from the enhanced image. These features are critical for distinguishing between different gestures.
Gesture Classification
The extracted features are compared with trained gesture datasets to identify the closest matching sign. This classification process determines the corresponding textual meaning of the gesture.
Overall, MATLAB provides an integrated platform where image preprocessing, feature extraction, classification, and visualization can be implemented efficiently within a single environment.
CHAPTER 5 RESULTS AND DISCUSSION
-
COMMON WORD GESTURES
Figure 5.1 illustrates the sign language gestures corresponding to commonly used daily communication words. These words are frequently used by deaf and mute individuals in routine interactions.
Examples of common gestures include:
These gestures were selected to represent essential conversational vocabulary and were included in the training dataset to ensure practical usability of the system.
-
DISTRICT NAME GESTURES
Figure 5.2 presents the hand gesture representations of selected district names. These gestures are particularly useful in the context
of train ticket booking, where passengers must specify their travel destination.
Sample district gestures include:
Each district gesture is mapped to its corresponding textual output during the training phase. This mapping allows the system to recognize destination-related gestures accurately.
-
SYSTEM PERFORMANCE ANALYSIS
The proposed hand gesture recognition system was tested using a dataset containing both common words and district-level gestures. During testing, gesture images were captured and passed through preprocessing stages including grayscale conversion, image enhancement, histogram equalization, and edge detection.
The processed images were then classified based on extracted features. The system successfully identified gestures and generated the corresponding text output, which was further converted into speech.
Experimental results indicate that the system performs reliably under controlled lighting conditions and clear gesture presentation. The accuracy of recognition demonstrates the feasibility of using image processing techniques for real-world service applications such as train ticket booking.
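Since the paper reports reliable recognition without numeric figures, the following hedged MATLAB sketch only shows how such an evaluation could be scripted, reusing the net and imdsTest names assumed in the earlier training sketch.

```matlab
% Illustrative accuracy check on a held-out test set; variable names follow
% the earlier assumed training sketch and do not reflect reported results.
augTest   = augmentedImageDatastore([300 300 1], imdsTest, 'ColorPreprocessing', 'rgb2gray');
predicted = classify(net, augTest);                  % classify every test image
accuracy  = mean(predicted == imdsTest.Labels);      % fraction of correct predictions
fprintf('Test accuracy: %.2f %%\n', 100 * accuracy);
confusionchart(imdsTest.Labels, predicted);          % per-class confusion matrix
```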
-
RESULT VISUALIZATION
Figure 5.3 shows the output obtained for the input gesture representing the word HELP. The intermediate processing stages and final recognized output are displayed, confirming correct gesture identification.
Figure 5.4 illustrates the recognition process for the district name VIRUDHUNAGAR. The system accurately processes the input gesture and produces the corresponding textual output.
These results validate the effectiveness of the proposed methodology in recognizing both general communication words and location-specific gestures.
CHAPTER 6 CONCLUSION
The development of an image-based deep learning framework
combined with image preprocessing techniques has the potential to significantly improve communication and accessibility for individuals with hearing disabilities. By integrating deep learning with vision-based hand gesture recognition, the proposed framework effectively bridges the communication gap between deaf and hard-of-hearing individuals and the hearing population. The system achieved consistent recognition accuracy under controlled lighting conditions.
The system focuses on real-world usability, particularly in public service environments such as train ticket booking counters. Experimental evaluation demonstrated that the proposed methodology can accurately recognize predefined gestures and generate meaningful outputs, thereby reducing dependency on human interpreters.
The implementation of this system contributes to enhanced accessibility, independence, and social inclusion for deaf and mute individuals. It also highlights the potential of image processing and assistive technologies in addressing real-life societal challenges.
The project structure was organized systematically: Chapter 1 introduced the problem domain, Chapter 2 reviewed existing literature, Chapter 3 detailed the proposed methodology, Chapter 4 explained the software architecture, and Chapter 5 discussed the results. This chapter concludes the work, while future enhancements are discussed in the next chapter.
CHAPTER 7 FUTURE WORK
Although the proposed system demonstrates promising results,
several enhancements can be explored in future work. The system
can be extended to support a wider range of gestures, including international sign languages, to improve global usability.
Employing deeper convolutional neural network (CNN) architectures and transfer learning can further enhance recognition accuracy and robustness. Training the model with larger and more diverse datasets, including variations in lighting, background, and hand orientation, will also improve performance in real-time environments.
Future versions of the system can be integrated into mobile or web- based applications to enable gesture-based interaction for ticket booking and other public services. Additionally, advanced techniques such as real-time video processing, 3D hand tracking, and multi-camera setups can be explored to improve system reliability.
User-centric evaluations involving deaf and mute individuals can also be conducted to assess usability and identify areas for improvement, ensuring that the system effectively meets real-world needs.
CHAPTER 8 OUTCOMES OF THE PROJECT
The following outcomes were achieved through the successful
completion of the project titled "A Proposed Framework of Hand Gesture Sign Language Recognition Using Deep Learning."
Program Outcomes (PO) Mapping
PO1 Engineering Knowledge: Applied fundamental concepts of image processing and pattern recognition using MATLAB.
PO2 Problem Analysis: Analyzed communication challenges faced by deaf and mute individuals in public transportation systems.
PO3 Design and Development of Solutions: Designed a gesture recognition system capable of identifying words and district names using image processing techniques.
PO4 Investigation of Complex Problems: Studied real-time gesture recognition challenges and evaluated suitable preprocessing techniques.
PO5 Modern Tool Usage: Utilized MATLAB and its Image Processing Toolbox for system development and testing.
PO6 Engineer and Society: Developed a socially relevant solution aimed at improving accessibility for people with disabilities.
PO7 Environment and Sustainability: Proposed a time-efficient and paperless solution suitable for public transport systems.
PO8 Ethics: Ensured ethical implementation by using licensed software and publicly available datasets.
PO9 Individual and Team Work: Successfully collaborated as a team, integrating ideas and responsibilities effectively.
PO10 Communication: Maintained effective communication through project documentation and presentations.
PO11 Project Management and Finance: Implemented a cost- effective system using readily available tools and resources.
PO12 Life-Long Learning: Enhanced technical knowledge in image processing and MATLAB through practical implementation.
Program Specific Outcomes (PSO)
PSO1: Applied Electronics and Communication Engineering concepts to develop an assistive technology solution using image processing.
PSO2: Utilized modern software tools to solve real-world problems related to accessibility and human-computer interaction.
REFERENCES
[1] Sruthi C. J. and Lijiya A., "Signet: A Deep Learning based Indian Sign Language Recognition System," 2019 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 2019, pp. 0596-0600, doi: 10.1109/ICCSP.2019.8698006.
[2] A. Das, S. Gawde, K. Suratwala and D. Kalbande, "Sign Language Recognition Using Deep Learning on Custom Processed Static Gesture Images," 2018 International Conference on Smart City and Emerging Technology (ICSCET), Mumbai, India, 2018, pp. 1-6, doi: 10.1109/ICSCET.2018.8537248.
[3] P. Likhar, N. K. Bhagat and R. G N, "Deep Learning Methods for Indian Sign Language Recognition," 2020 IEEE 10th International Conference on Consumer Electronics (ICCE-Berlin), Berlin, Germany, 2020, pp. 1-6, doi: 10.1109/ICCE-Berlin50680.2020.9352194.
[4] P. Mistry, V. Jotaniya, P. Patel, N. Patel and M. Hasan, "Indian Sign Language Recognition using Deep Learning," 2021 International Conference on Artificial Intelligence and Machine Vision (AIMV), Gandhinagar, India, 2021, pp. 1-6, doi: 10.1109/AIMV53313.2021.9670933.
[5] M. R. Chilukala and V. Vadalia, "A Report on Translating Sign Language to English Language," 2022 International Conference on Electronics and Renewable Systems (ICEARS), Tuticorin, India, 2022, pp. 1849-1854, doi: 10.1109/ICEARS53579.2022.9751846.
[6] K. K. Dutta and S. A. S. Bellary, "Machine Learning Techniques for Indian Sign Language Recognition," 2017 International Conference on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC), Mysore, India, 2017, pp. 333-336, doi: 10.1109/CTCEEC.2017.8454988.
[7] P. Sharma and R. S. Anand, "Deep models and optimizers for Indian sign language recognition," 11th International Conference of Pattern Recognition Systems (ICPRS 2021), Online Conference, 2021, pp. 217-222, doi: 10.1049/icp.2021.1445.
[8] N. K. Bhagat, Y. Vishnusai and G. N. Rathna, "Indian Sign Language Gesture Recognition using Image Processing and Deep Learning," 2019 Digital Image Computing: Techniques and Applications (DICTA), Perth, WA, Australia, 2019, pp. 1-8, doi: 10.1109/DICTA47822.2019.8945850.
[9] G. A. Rao, K. Syamala, P. V. V. Kishore and A. S. C. S. Sastry, "Deep convolutional neural networks for sign language recognition," 2018 Conference on Signal Processing And Communication Engineering Systems (SPACES), Vijayawada, India, 2018, pp. 194-197, doi: 10.1109/SPACES.2018.8316344.
[10] B. Thanasekhar, G. Deepak Kumar, V. Akshay and A. M. Ashfaaq, "Real Time Conversion of Sign Language using Deep Learning for Programming Basics," 2019 11th International Conference on Advanced Computing.
[11] A. Das, S. Gawde, K. Suratwala and D. Kalbande, "Sign Language Recognition Using Deep Learning on Custom Processed Static Gesture Images," 2018 International Conference on Smart City and Emerging Technology (ICSCET), Mumbai, India, 2018, pp. 1-6, doi: 10.1109/ICSCET.2018.8537248.
[12] P. K. Datta, A. Biswas, A. Ghosh and N. Chaudhury, "Creation of Image Segmentation Classifiers for Sign Language Processing for Deaf and Dumb," 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 2020, pp. 772-775, doi: 10.1109/ICRITO48877.2020.9197978.
[13] M. Safeel, T. Sukumar, S. K. S, A. M. D, S. R and P. S. B, "Sign Language Recognition Techniques - A Review," 2020 IEEE International Conference for Innovation in Technology (INOCON), Bengaluru, India, 2020, pp. 1-9, doi: 10.1109/INOCON50539.2020.9298376.
[14] C. Suardi, A. N. Handayani, R. A. Asmara, A. P. Wibawa, L. N. Hayati and H. Azis, "Design of Sign Language Recognition Using E-CNN," 2021 3rd East Indonesia Conference on Computer and Information Technology (EIConCIT), Surabaya, Indonesia, 2021, pp. 166-170, doi: 10.1109/EIConCIT50028.2021.9431877.
[15] Mahesh Kumar N B, "Conversion of Sign Language into Text," International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 13, Number 9 (2018), pp. 7154-7161.
