Music Recommendation System Based on Facial Emotion Recognition

dc.contributor.author Karunarathna, Layantha
dc.date.accessioned 2024-04-19T09:11:58Z
dc.date.available 2024-04-19T09:11:58Z
dc.date.issued 2023
dc.identifier.citation Karunarathna, Layantha (2023) Music Recommendation System Based on Facial Emotion Recognition. BSc. Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 2019767
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/2021
dc.description.abstract Personalized music recommendations have become a popular feature in music streaming platforms, as they enhance the user experience by providing tailored content that matches users' tastes and preferences. However, accurately capturing users' emotional states remains a significant challenge. Traditional recommendation systems rely on users' explicit feedback, such as ratings or playlists, which may not reflect their current emotional state. This matters because music plays an essential role in regulating emotions, and people often seek out music that matches how they feel. The proposed emotion-based music recommendation system addresses this challenge by using deep learning and image recognition techniques to identify users' emotions from their facial expressions. This approach provides a more nuanced understanding of users' emotional states, which can lead to more accurate and relevant music recommendations. The technical work involved designing and training a Convolutional Neural Network (CNN) with multiple layers to identify emotions from facial expressions. The FER-2013 dataset from Kaggle was used to train the model; it covers seven emotion classes: anger, disgust, fear, happiness, sadness, surprise, and neutral. The CNN takes an image of a face as input and predicts the most likely of the seven emotions. The architecture and parameters were adjusted over several optimization iterations to maximize accuracy. Once an emotion was identified, the system queried the Spotify API to recommend songs matching the user's emotional state. The user interface lets users submit a photo of their face, from which the CNN predicts their emotional state, providing a seamless user experience. The system was evaluated using standard data science metrics. Emotion detection with the CNN, evaluated on the FER-2013 dataset, achieved an accuracy of 65%. While this is a promising result, there is still room for improvement, since emotion recognition is a complex and challenging task. The music recommendation component was evaluated using precision, recall, and F1-score, which are commonly used to assess classification models, and reached an overall accuracy of 80%, demonstrating the system's potential to provide personalized, emotion-based music recommendations. The evaluation also revealed that some emotions were easier to detect than others, which may require further research to improve the system's accuracy. Overall, the test results suggest that the proposed emotion-based music recommendation system can significantly enhance the user experience in music streaming platforms. (Illustrative code sketches of the CNN classifier, the Spotify lookup, and the per-emotion evaluation follow this record.) en_US
dc.language.iso en en_US
dc.subject Convolutional Neural Network en_US
dc.subject Image processing en_US
dc.subject Face emotion detection en_US
dc.title Music Recommendation System Based on Facial Emotion Recognition en_US
dc.type Thesis en_US
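
The abstract describes a multi-layer CNN trained on FER-2013 (48x48 grayscale face images, seven emotion classes). The dissertation's actual architecture is not reproduced in this record, so the layer counts, dropout rate, and optimizer below are assumptions; this is a minimal Keras sketch of that kind of classifier, not the author's model.

```python
# Minimal sketch of a FER-2013-style emotion classifier in Keras.
# The dissertation's actual architecture is not given in this record;
# layer sizes, dropout rate, and optimizer settings here are assumptions.
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # anger, disgust, fear, happiness, sadness, surprise, neutral

def build_emotion_cnn(input_shape=(48, 48, 1)):
    """Small CNN: stacked conv blocks, then a dense softmax head."""
    model = models.Sequential([
        layers.Input(shape=input_shape),  # FER-2013: 48x48 grayscale faces
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # regularization; the rate is a guess
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_emotion_cnn()
model.summary()
```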
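The abstract also states that, once an emotion is detected, the system queries the Spotify API for matching songs. The record does not say which endpoint or emotion-to-music mapping was used, so the sketch below, using the spotipy client library and its search endpoint, is one plausible approach; the EMOTION_QUERIES table is entirely hypothetical.

```python
# Sketch of an emotion-to-track lookup against the Spotify Web API via
# spotipy (https://spotipy.readthedocs.io). The endpoint choice and the
# mapping from emotions to search queries are illustrative assumptions.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Hypothetical mapping from a detected emotion to a search query.
EMOTION_QUERIES = {
    "anger":     "calming instrumental",
    "disgust":   "soothing acoustic",
    "fear":      "reassuring ambient",
    "happiness": "upbeat pop",
    "sadness":   "gentle piano",
    "surprise":  "energetic dance",
    "neutral":   "chill lo-fi",
}

def recommend_tracks(emotion, limit=10):
    """Return (track name, artist) pairs for the detected emotion."""
    # SpotifyClientCredentials() reads SPOTIPY_CLIENT_ID and
    # SPOTIPY_CLIENT_SECRET from the environment.
    sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())
    results = sp.search(q=EMOTION_QUERIES[emotion], type="track", limit=limit)
    return [(t["name"], t["artists"][0]["name"])
            for t in results["tracks"]["items"]]

print(recommend_tracks("happiness"))
```

A lookup table over search queries is the simplest possible mapping; the dissertation may instead have used genre seeds, audio features, or curated playlists, which this record does not specify.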
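Finally, the abstract reports per-emotion precision, recall, and F1-score, and notes that some emotions were easier to detect than others. A minimal scikit-learn sketch of that kind of evaluation follows; the y_true and y_pred arrays are stand-ins, where in the project they would come from the FER-2013 test split and the trained CNN.

```python
# Sketch of the per-emotion evaluation described in the abstract,
# using scikit-learn. The label arrays below are illustrative stand-ins.
from sklearn.metrics import classification_report

LABELS = ["anger", "disgust", "fear", "happiness",
          "sadness", "surprise", "neutral"]

y_true = ["happiness", "sadness", "neutral", "anger", "happiness"]
y_pred = ["happiness", "neutral", "neutral", "anger", "sadness"]

# Per-class precision, recall, and F1, plus macro/weighted averages;
# uneven per-class scores are how "some emotions were easier to
# detect than others" would show up in practice.
print(classification_report(y_true, y_pred, labels=LABELS, zero_division=0))
```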

