LumoVoice - Voice Assistant for Elementary and Primary Vision-Impaired Students via Mobile Application

dc.contributor.author Mohamed Jiffry, Fathima Rinoza
dc.date.accessioned 2026-03-24T04:56:25Z
dc.date.available 2026-03-24T04:56:25Z
dc.date.issued 2025
dc.identifier.citation Mohamed Jiffry, Fathima Rinoza (2025) LumoVoice - Voice Assistant for Elementary and Primary Vision-Impaired Students via Mobile Application. BSc. Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 20191161
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/3038
dc.description.abstract Approximately one million individuals in Sri Lanka are affected by vision impairments, a considerable proportion of whom are students and young people. Vision-impaired elementary and primary children encounter significant obstacles in accessing traditional educational curricula. Current voice assistants cannot adapt their language and responses for young learners, often overwhelm users with complex information, and fail to provide educational context designed for visually impaired children. This creates a critical gap in accessible, voice-based educational assistance tools that cater to the unique needs of elementary and primary vision-impaired students, who require age-appropriate language, cognitive load management, and educationally relevant interactions. LumoVoice addresses these difficulties with an integrated system design comprising mobile application, server, and data layers. The system's foundation is a neural-network-driven natural language processing model developed with TensorFlow using a sequential architecture. The model consists of three dense layers: an input layer with 128 neurons using ReLU activation, a hidden layer with 64 neurons, and an output layer using softmax activation to produce probability distributions across intent classes. To avoid overfitting, dropout layers with a rate of 0.5 were incorporated between the dense layers. The system uses specialized algorithms for text preparation (tokenization and lemmatization), bag-of-words construction, intent classification with a confidence threshold of 0.25, and response formulation. The components are interconnected through a Flask API that mediates communication between the mobile application and the backend services. In evaluation, the neural network model achieved strong accuracy, 86.15% overall, in identifying user intent, with a precision of 85.79% and a recall of 86.15%, demonstrating reliable recognition of and response to user needs. An F1-score of 0.84 indicates a strong balance between precision and recall, which translates into fewer intent-classification errors. During testing, the speech recognition model's loss decreased from 7.46 to 3.45 while accuracy improved consistently. Beyond these numerical metrics, the results demonstrate that LumoVoice can serve as an accessible learning tool for visually impaired students in elementary and primary education while addressing shortcomings present in existing assistive technologies. en_US
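The layer configuration described in the abstract (a 128-neuron ReLU input layer, a 64-neuron hidden layer, a softmax output layer, and dropout of 0.5 between the dense layers) can be sketched in TensorFlow/Keras roughly as follows. The vocabulary size and number of intent classes are placeholder values, since the abstract does not state them, and the ReLU activation on the hidden layer is an assumption (the abstract specifies ReLU only for the input layer):

```python
import tensorflow as tf

VOCAB_SIZE = 500   # assumed bag-of-words vocabulary size (not given in the abstract)
NUM_INTENTS = 10   # assumed number of intent classes (not given in the abstract)

# Sequential model matching the described architecture:
# Dense(128, ReLU) -> Dropout(0.5) -> Dense(64) -> Dropout(0.5) -> Dense(softmax)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(VOCAB_SIZE,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(64, activation="relu"),  # hidden-layer activation assumed
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_INTENTS, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The softmax output yields a probability distribution over intent classes, which is what the 0.25 confidence threshold mentioned in the abstract is applied against.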
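The text-preparation and classification steps (tokenization, bag-of-words construction, and intent selection with a 0.25 confidence threshold) can be illustrated with a minimal, dependency-free sketch. The tokenizer here is a trivial stand-in, and lemmatization is omitted since the abstract does not name the library used:

```python
CONFIDENCE_THRESHOLD = 0.25  # threshold stated in the abstract

def tokenize(sentence):
    """Lowercase and split on whitespace (a simple stand-in for a real tokenizer)."""
    return sentence.lower().split()

def bag_of_words(sentence, vocabulary):
    """Build a binary bag-of-words vector over a fixed vocabulary."""
    tokens = set(tokenize(sentence))
    return [1 if word in tokens else 0 for word in vocabulary]

def classify(probabilities, intents):
    """Pick the highest-probability intent; reject predictions below the threshold."""
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    if probabilities[best] < CONFIDENCE_THRESHOLD:
        return None  # too uncertain: the assistant can fall back to a clarifying prompt
    return intents[best]
```

For example, `classify([0.1, 0.7, 0.2], ["greet", "math", "story"])` returns `"math"`, while a flat distribution such as `[0.2] * 5` falls below the threshold and returns `None`.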
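The Flask API connecting the mobile application to the backend could look roughly like the sketch below. The `/chat` endpoint name, the JSON request shape, and the stubbed classifier are all assumptions for illustration; the abstract only states that a Flask API links the layers:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_intent(message):
    """Placeholder for the neural-network intent classifier described in the abstract."""
    return {"intent": "greeting",
            "response": "Hello! What would you like to learn today?"}

@app.route("/chat", methods=["POST"])
def chat():
    # The mobile app is assumed to POST JSON of the form {"message": "..."}.
    data = request.get_json(force=True)
    result = predict_intent(data.get("message", ""))
    return jsonify(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Returning JSON keeps the contract simple for the mobile client, which can pass the `response` text straight to its text-to-speech output.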
dc.language.iso en en_US
dc.subject Speech Recognition en_US
dc.subject Neural Networks en_US
dc.subject Voice Assistant en_US
dc.title LumoVoice - Voice Assistant for Elementary and Primary Vision-Impaired Students via Mobile Application en_US
dc.type Thesis en_US

