Abstract:
"Communication barriers remain a challenge for people who use American Sign Language
(ASL) and those who rely on written text, particularly in Sinhala. This lack of connection
hinders effective interaction and inclusion for the deaf and hard-of-hearing community,
impacting their social, educational, and professional opportunities. Additionally, there has been
limited research focused on developing systems that accurately convert sign language into text
in different languages, posing communication challenges for multicultural deaf communities.
To address this challenge, the SignLinker project utilizes advanced machine learning
techniques, using Convolutional Neural Networks (CNN) for strong feature extraction from
ASL gestures and Long Short-Term Memory (LSTM) networks for sequence modeling. The
system generates accurate translations into Sinhala texts in real-time video stream analysis, and
its meticulous development process, including rigorous testing phases, ensures high accuracy
rates and usability enhancements. This empowers individuals with hearing impairments to
engage more confidently across various contexts.
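As a rough illustration of the CNN-LSTM architecture described above, the following sketch applies a frame-wise CNN to a video clip and feeds the resulting feature sequence to an LSTM classifier. The frame size, clip length, layer widths, and class count are assumed for illustration only and are not taken from the SignLinker implementation.

```python
# A minimal CNN-LSTM sketch, assuming 64x64 grayscale frames, 30-frame clips,
# and 26 gesture classes; all sizes are illustrative, not the project's actual
# configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26   # assumed number of ASL gesture classes
FRAMES = 30        # assumed frames per video clip
H, W, C = 64, 64, 1

model = models.Sequential([
    # The CNN is applied to every frame independently to extract spatial features
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu"),
                           input_shape=(FRAMES, H, W, C)),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    # The LSTM models the temporal order of the per-frame feature vectors
    layers.LSTM(128),
    # Softmax over gesture classes; each class maps to a Sinhala text label
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```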
The system's performance evaluation showed an average recognition
accuracy exceeding 70% across a diverse dataset of ASL gestures. Precision, recall, and F1
scores for key classes were also consistently high, indicating robust and reliable
translation of ASL gestures into the correct Sinhala text. These metrics demonstrate the system's
effectiveness in overcoming communication barriers and enabling individuals with hearing
impairments to participate more fully in social, educational, and professional
settings.