Digital Repository

Realtime Sinhala Sign Language Translation System using CNN


dc.contributor.author Wittatchy, Neja
dc.date.accessioned 2026-04-07T03:44:08Z
dc.date.available 2026-04-07T03:44:08Z
dc.date.issued 2025
dc.identifier.citation Wittatchy, Neja (2025) Realtime Sinhala Sign Language Translation System using CNN. BSc. Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 20201261
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/3113
dc.description.abstract Communication between the deaf and hearing communities in Sri Lanka remains a significant challenge due to limited public understanding of, and technological support for, Sinhala Sign Language (SSL). Most existing assistive tools for sign language translation are restricted to alphabet-level or isolated word-based recognition and often rely on offline processing. These limitations reduce their practical usability in real-world environments such as classrooms, healthcare facilities, public services, and everyday social interactions. As a result, deaf individuals frequently experience misunderstandings, communication barriers, and social exclusion.

To address this gap, this research presents a real-time Sinhala Sign Language translation system built on a Convolutional Neural Network (CNN). The proposed system recognizes static SSL hand gestures captured through a camera and translates them into meaningful textual output in real time. A custom dataset of SSL gestures was used to train the model, with preprocessing techniques such as image resizing, normalization, and background noise reduction applied to improve robustness and consistency. The CNN architecture was selected for its ability to automatically extract spatial features and accurately classify the complex visual patterns present in hand gestures.

The trained model achieved an overall classification accuracy of 92%, demonstrating strong performance in recognizing SSL signs. Evaluation metrics further indicated a false-positive rate of 3% and a false-negative rate of 5%, reflecting a balanced and reliable prediction capability. Additionally, the model recorded an AUC-ROC score of 0.91, highlighting its strong discriminative power across multiple classes. Optimization strategies ensured low-latency processing, making the system suitable for real-time deployment on resource-constrained devices. The results indicate that the proposed system has significant potential to improve accessibility and inclusivity for the deaf community in Sri Lanka by enabling more natural and effective communication with hearing individuals. en_US
dc.language.iso en en_US
dc.subject Sign Language Translation en_US
dc.subject Sinhala Sign Language en_US
dc.subject Real Time Communication en_US
dc.title Realtime Sinhala Sign Language Translation System using CNN en_US
dc.type Thesis en_US
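The abstract reports accuracy, false-positive rate, and false-negative rate for the classifier. As a minimal sketch of how such figures are derived, the snippet below computes these three rates for one class treated as positive (one-vs-rest) from paired label lists. The sign labels and predictions here are illustrative placeholders, not data from the thesis.

```python
def binary_rates(y_true, y_pred, positive):
    """Accuracy, false-positive rate, and false-negative rate for one
    class treated as positive (one-vs-rest over the label lists)."""
    tp = fp = tn = fn = 0
    for t, p in zip(y_true, y_pred):
        if p == positive and t == positive:
            tp += 1          # correctly predicted the positive sign
        elif p == positive and t != positive:
            fp += 1          # predicted positive, but it was another sign
        elif p != positive and t == positive:
            fn += 1          # missed the positive sign
        else:
            tn += 1          # correctly rejected
    accuracy = (tp + tn) / len(y_true)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return accuracy, fpr, fnr

# Illustrative labels for a small sign vocabulary (hypothetical data)
y_true = ["ayubowan", "istuti", "ayubowan", "istuti", "hari", "hari"]
y_pred = ["ayubowan", "istuti", "istuti", "istuti", "hari", "hari"]
acc, fpr, fnr = binary_rates(y_true, y_pred, positive="ayubowan")
```

In a multi-class setting such as SSL gesture recognition, these per-class rates would typically be averaged across all sign classes to obtain the overall figures quoted.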

