Sign Language and Gesture Recognition for Deaf People
Keywords:
Artificial Neural Network (ANN), Assistive technology, Communication accessibility, Digital Image Processing (DIP), Gesture Recognition, Region of Interest (ROI), Sign Language Recognition (SLR), Virtual Talking

Abstract
This paper presents a novel sensorless system for real-time sign language recognition designed to overcome the communication barriers that individuals with hearing and speech impairments face. Unlike traditional sensor-based approaches, this system utilizes a standard web camera and Artificial Neural Networks (ANN) to interpret hand gestures, eliminating the need for cumbersome wearable devices. Digital image processing techniques, including contour detection, region segmentation, and feature extraction, are employed to analyze and classify gestures accurately. The system then translates recognized gestures into corresponding voice and text outputs, enabling seamless communication between hearing and non-hearing individuals.
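The segmentation and feature-extraction steps described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes grayscale frames, uses a simple brightness threshold in place of the paper's contour-based segmentation, and the feature set (area, centroid, aspect ratio, extent) is a hypothetical choice for demonstration.

```python
import numpy as np

def segment_hand(frame, threshold=128):
    """Toy region segmentation: pixels brighter than `threshold`
    are treated as the hand (foreground). A real system would use
    contour detection and skin-color modeling instead."""
    return (frame > threshold).astype(np.uint8)

def extract_features(mask):
    """Simple shape features from a binary mask, suitable as ANN input:
    [area, centroid_y, centroid_x, aspect ratio, extent]."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    if area == 0:
        return np.zeros(5)
    cy, cx = ys.mean(), xs.mean()
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    extent = area / (h * w)          # fraction of bounding box filled
    return np.array([area, cy, cx, w / h, extent])
```

In a full pipeline, a feature vector like this (or richer descriptors such as contour moments) would be fed to the ANN classifier for each video frame.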
A key innovation is the use of an ANN, which allows the system to adapt and improve its recognition accuracy over time. Trained on a diverse dataset of sign language gestures, the ANN effectively handles variations in lighting conditions and individual signing styles. By incorporating Region of Interest (ROI) extraction, the system focuses on the relevant gesture area, reducing noise and enhancing recognition efficiency.
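ROI extraction as described above can be sketched like this. Again, a hedged illustration rather than the paper's code: it assumes a binary hand mask is already available and simply crops the frame to the mask's padded bounding box, so the ANN only sees the gesture region rather than the full image.

```python
import numpy as np

def extract_roi(frame, mask, pad=10):
    """Crop `frame` to the bounding box of the foreground pixels in
    `mask`, expanded by `pad` pixels on each side (clipped to the
    frame borders). Returns the full frame if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return frame
    y0 = max(ys.min() - pad, 0)
    y1 = min(ys.max() + pad + 1, frame.shape[0])
    x0 = max(xs.min() - pad, 0)
    x1 = min(xs.max() + pad + 1, frame.shape[1])
    return frame[y0:y1, x0:x1]
```

Cropping to the ROI before classification reduces background noise and shrinks the ANN's input, which is part of why the abstract reports improved recognition efficiency.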
This sensorless approach offers significant advantages in accessibility, cost-effectiveness, and user-friendliness. It promotes inclusivity by facilitating communication in various settings, including schools, workplaces, and public spaces, without the limitations of sensor-based systems. This research demonstrates the potential of ANN and digital image processing in developing assistive technologies that empower individuals with disabilities and foster a more inclusive society.