Abstract:
"The presented research addresses a significant challenge in the realm of communication: the language barrier faced by deaf and hard-of-hearing individuals. Existing sign language-to-text or speech and vice versa translation tools have made strides toward mitigating this issue; however, a gap remains in facilitating seamless communication among users of different sign languages. This gap, underscored by the variations and nuances specific to regional sign languages, such as American Sign Language (ASL) and British Sign Language (BSL), presents a formidable challenge in achieving effective and inclusive communication.
To tackle this issue, the research proposes the development of a real-time, machine learning-powered sign language translation system. The system leverages advanced machine learning techniques, including Deep Neural Networks (DNNs) and Support Vector Machines (SVMs), integrated with the MediaPipe framework for enhanced recognition of gestures, hand movements, and facial expressions. These technologies translate sign language in real time, thereby bridging the communication gap between users of different sign languages. Quantitative measurements confirm the effectiveness of the solution: confusion matrices and AUC-ROC curves characterize the model's accuracy, recall, and overall performance on the classification task. Preliminary results show promising accuracy rates, suggesting that the application can help empower the deaf community and remove obstacles to communication.
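
To make the described pipeline concrete, the sketch below shows one plausible realization of the landmark-extraction and classification stages, assuming the MediaPipe Hands solution feeding a scikit-learn SVM. The classifier, its synthetic training data, and the names extract_features and SIGN_LABELS are illustrative assumptions, not the actual system.

```python
# Illustrative sketch only: MediaPipe hand landmarks feeding an SVM.
# The SVM, its synthetic training data, and all helper names are
# hypothetical stand-ins for the system described in the abstract.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.svm import SVC

def extract_features(frame_bgr, hands):
    """Flatten 21 hand landmarks (x, y, z) into a 63-dim feature vector."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None  # no hand detected in this frame
    landmarks = result.multi_hand_landmarks[0].landmark
    return np.array([[lm.x, lm.y, lm.z] for lm in landmarks]).ravel()

# Fit on synthetic data purely so the sketch runs end to end; a real
# system would train on labelled landmark vectors from sign videos.
rng = np.random.default_rng(0)
clf = SVC(kernel="rbf").fit(rng.random((40, 63)), rng.integers(0, 2, 40))
SIGN_LABELS = ["hello", "thanks"]  # hypothetical gesture classes

with mp.solutions.hands.Hands(static_image_mode=False,
                              max_num_hands=1) as hands:
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        features = extract_features(frame, hands)
        if features is not None:
            pred = clf.predict(features.reshape(1, -1))[0]
            print("predicted sign:", SIGN_LABELS[pred])
    cap.release()
```

In practice a DNN could replace the SVM for the per-frame classification step without changing the landmark-extraction stage.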
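The evaluation metrics mentioned above could likewise be computed with standard scikit-learn utilities; a minimal sketch follows, where y_true, y_pred, and y_score are hypothetical held-out arrays, not results from the study.

```python
# Minimal evaluation sketch: confusion matrix and AUC-ROC on held-out
# predictions; y_true, y_pred, and y_score are hypothetical arrays.
import numpy as np
from sklearn.metrics import confusion_matrix, recall_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0])                # ground-truth classes
y_pred = np.array([0, 1, 1, 1, 0, 0])                # hard predictions
y_score = np.array([0.2, 0.6, 0.9, 0.8, 0.4, 0.1])   # probability of class 1

print(confusion_matrix(y_true, y_pred))   # rows: true, cols: predicted
print("recall:", recall_score(y_true, y_pred))
print("AUC-ROC:", roc_auc_score(y_true, y_score))
```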