Music therapy through facial expression recognition

dc.contributor.author Mendis, Chanuka
dc.date.accessioned 2024-04-24T08:48:29Z
dc.date.available 2024-04-24T08:48:29Z
dc.date.issued 2023
dc.identifier.citation Mendis, Chanuka (2023) Music therapy through facial expression recognition. BSc. Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 2019425
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/2047
dc.description.abstract "Music therapy is an increasingly used approach to treating emotional and mental illness, and it has been shown to be effective in improving a person's current mood. However, traditional music libraries do not consider the user's emotional state, so their recommendations may be neither personalized nor effective. To address this, we developed a digital music therapy application that uses facial expression recognition to predict the user's valence score and suggest songs that can improve their current mood. We used two separate models, one for emotion prediction and one for valence prediction. The emotion prediction model uses a Convolutional Neural Network (CNN) architecture and was trained and tested on the FER-2013 dataset to identify seven emotions: anger, disgust, fear, happiness, sadness, surprise, and neutral. The ImageDataGenerator class in Keras was used to pre-process and augment the data. The valence prediction model also uses a CNN architecture and was trained on the AffectNet dataset; this regression model predicts the valence score of the user's emotional state, which ranges from -1 to 1, and was trained with the mean squared error loss function and the Adam optimizer. In a benchmarking comparison between the two models, the sequential model achieved an accuracy of 60%, while the transfer learning model built on a pre-trained ResNet achieved 65%. The transfer learning model outperformed the sequential model by 5 percentage points, indicating that leveraging the pre-trained ResNet improved performance." en_US
dc.language.iso en en_US
dc.subject Linear regression en_US
dc.subject Music Therapy en_US
dc.subject Valence en_US
dc.title Music therapy through facial expression recognition en_US
dc.type Thesis en_US
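
The emotion classifier described in the abstract can be illustrated with a short sketch: a sequential Keras CNN trained on FER-2013 (48x48 grayscale faces, seven emotion classes), with the ImageDataGenerator class handling pre-processing and augmentation. The dataset path, augmentation settings, and layer sizes below are assumptions for illustration, not the dissertation's actual configuration.

    # Minimal sketch of a sequential emotion-classification CNN for FER-2013.
    # Architecture details are assumed; the dissertation does not list them here.
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    NUM_CLASSES = 7  # anger, disgust, fear, happiness, sadness, surprise, neutral

    # ImageDataGenerator rescales pixels and augments the training images,
    # as the abstract describes.
    train_gen = ImageDataGenerator(
        rescale=1.0 / 255,
        rotation_range=10,
        width_shift_range=0.1,
        height_shift_range=0.1,
        horizontal_flip=True,
    )

    # "fer2013/train" is a hypothetical path: one sub-folder per emotion class.
    train_data = train_gen.flow_from_directory(
        "fer2013/train",
        target_size=(48, 48),
        color_mode="grayscale",
        class_mode="categorical",
        batch_size=64,
    )

    # A small sequential CNN; FER-2013 images are 48x48 single-channel.
    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation="relu"),
        Dropout(0.5),
        Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_data, epochs=30)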
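The valence regressor and the transfer learning benchmark can be sketched in the same way: a regression head on a pre-trained ResNet, compiled with the mean squared error loss and the Adam optimizer as the abstract states. The ResNet50 variant, input size, head layers, and learning rate are assumptions; the abstract says only "ResNet pre-trained model", and the AffectNet training arrays are placeholders.

    # Minimal sketch of the transfer-learning valence regressor (MSE + Adam).
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import ResNet50

    # Frozen ImageNet backbone; only the regression head is trained here.
    backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                        input_shape=(224, 224, 3))
    backbone.trainable = False

    valence_model = models.Sequential([
        backbone,
        layers.Dense(64, activation="relu"),
        # tanh keeps predictions in the [-1, 1] valence range the abstract gives.
        layers.Dense(1, activation="tanh"),
    ])
    valence_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                          loss="mse",
                          metrics=["mae"])

    # x_train / y_train would be AffectNet face crops and their valence
    # annotations; they are placeholders, not provided by the dissertation.
    # valence_model.fit(x_train, y_train, epochs=20, validation_split=0.1)

The tanh output is one way to bound a regression head to the stated valence range; a linear output with clipping would be an equally plausible reading of the abstract.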

