Abstract:
"Sound and music are a right of humankind: they should be heard, felt, and deeply absorbed. Today, enabling everyone to understand and lose themselves in the realms of music, irrespective of their disabilities, is possible only through the power of computer science. This research aims to pioneer a new domain in which human perception of sound and AI are woven together.
This paper examines the role of sound source classification and localization. We review the current state of sound classification using artificial intelligence (AI), paying particular attention to methods for estimating sound source distance.
Through a comprehensive literature review, we identify key challenges and opportunities in this area. We then
present the rationale for developing an environmental detection algorithm based on acoustic segmentation. This
system has the potential to significantly improve the mobility and independence of people with hearing loss by
providing real-time environmental information. The paper identifies the specific research question and the existing knowledge gaps that this project aims to address.
We describe in detail the research design that has been implemented and the way we are addressing the
identified challenges. Finally, we present a high-level project timeline with key deliverables for each phase.
This research has the potential to contribute significantly to the development of applications that can improve the quality of life for people with hearing loss."