| dc.contributor.author | Kalu Mudiyanselage, Chenuki Mihinya | |
| dc.date.accessioned | 2026-03-24T08:07:18Z | |
| dc.date.available | 2026-03-24T08:07:18Z | |
| dc.date.issued | 2025 | |
| dc.identifier.citation | Kalu Mudiyanselage, Chenuki Mihinya (2025) Sign Language Learning & Translation with User Interaction. BSc. Dissertation, Informatics Institute of Technology | en_US |
| dc.identifier.issn | 20200203 | |
| dc.identifier.uri | http://dlib.iit.ac.lk/xmlui/handle/123456789/3053 | |
| dc.description.abstract | The barriers to communication between Deaf or Hard-of-Hearing people and the general public remain a serious social concern, often due to limited use and understanding of sign language. While various studies have focused on static hand gesture detection, many current systems lack real-time responsiveness and interactive learning. This project proposes a real-time recognition system for American Sign Language (ASL) that allows users to upload hand gesture images or perform live gestures via a webcam. The system predicts individual letters using a Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory (BiLSTM) model, which allows for both spatial and temporal information extraction, enhancing accuracy and understanding. Users can interactively form words from the predicted letters, supporting both communication and self-paced learning. The system also has a simple, user-friendly interface that encourages interaction and accessibility for non-signers. This technique not only improves ASL learning but also helps to close the communication gap in educational and social settings. | en_US |
| dc.language.iso | en | en_US |
| dc.subject | Convolutional Neural Network | en_US |
| dc.subject | Bidirectional Long Short-Term Memory | en_US |
| dc.subject | Sign Language | en_US |
| dc.title | Sign Language Learning & Translation with User Interaction | en_US |
| dc.type | Thesis | en_US |