dc.description.abstract |
Personalized music recommendation has become a popular feature of music streaming platforms, enhancing the user experience by providing content tailored to each user's tastes. Accurately capturing a user's emotional state, however, remains a significant challenge. Traditional recommendation systems rely on explicit feedback, such as ratings or playlists, which may not reflect the user's current emotional state. This matters because music plays an essential role in regulating emotion, and people often seek out music that matches how they feel. The proposed emotion-based music recommendation system addresses this challenge by using deep learning and image recognition to identify a user's emotion from facial expressions. This approach gives the system a more nuanced understanding of the user's emotional state, which can lead to more accurate and relevant music recommendations.

On the technical side, the project involved designing and training a multi-layer Convolutional Neural Network (CNN) to classify emotions from facial expressions. The model was trained on the FER-2013 dataset from Kaggle, which covers seven emotions: anger, disgust, fear, happiness, sadness, surprise, and neutral. The CNN takes an image of a face as input and predicts the most likely of the seven emotion categories; its architecture and parameters were tuned over several optimization iterations to reach the highest accuracy possible. Once an emotion is identified, the system queries the Spotify API to recommend songs that match the user's emotional state. The user interface lets users submit a photo of their face, from which the CNN predicts their emotional state, providing a seamless user experience.

The system was evaluated using standard data science metrics. Emotion detection with the CNN achieved 65% accuracy on the FER-2013 dataset. While this is a promising result, there is still room for improvement, especially since emotion recognition is a complex and challenging task. The music recommendation component was evaluated using precision, recall, and F1-score, metrics commonly used in data science to evaluate classification models, and achieved an overall accuracy of 80%, demonstrating the potential of the proposed system to provide personalized music recommendations based on users' emotions. The evaluation also revealed that some emotions are easier to detect than others, which may require further research to improve the system's accuracy. Overall, the test results suggest that the proposed emotion-based music recommendation system has the potential to significantly enhance the user experience on music streaming platforms. |
en_US |
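The abstract describes a multi-layer CNN that classifies FER-2013 face images into seven emotions but does not specify the architecture. Below is a minimal Keras sketch under assumed layer sizes; FER-2013 images are 48x48 grayscale, and the filter counts and dense-layer width here are illustrative assumptions, not the authors' actual model.

```python
# Minimal sketch only: the abstract gives no architecture details,
# so all layer sizes below are assumptions, not the authors' model.
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7  # anger, disgust, fear, happiness, sadness, surprise, neutral

def build_emotion_cnn(input_shape=(48, 48, 1)):
    """Small CNN for FER-2013-style 48x48 grayscale face crops."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_EMOTIONS, activation="softmax"),  # one probability per emotion
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The abstract also states that, once an emotion is detected, the system queries the Spotify API for matching songs, without saying how the query is built. A plausible sketch using the spotipy client is shown below; the emotion-to-audio-feature mapping (valence and energy targets) and the genre seed are hypothetical choices, not confirmed by the source. For reference, the F1-score used in the evaluation is the harmonic mean of precision P and recall R, F1 = 2PR / (P + R).

```python
# Hypothetical emotion-to-recommendation mapping: the abstract confirms only
# that the Spotify API is queried, not which parameters are used.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Assumed mapping from detected emotion to Spotify audio-feature targets.
EMOTION_TARGETS = {
    "happiness": {"target_valence": 0.9, "target_energy": 0.8},
    "sadness":   {"target_valence": 0.2, "target_energy": 0.3},
    "anger":     {"target_valence": 0.3, "target_energy": 0.9},
    "neutral":   {"target_valence": 0.5, "target_energy": 0.5},
}

def recommend_for_emotion(emotion, limit=10):
    """Return (track, artist) pairs matching the detected emotion.

    Requires SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET in the environment.
    """
    sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())
    params = EMOTION_TARGETS.get(emotion, EMOTION_TARGETS["neutral"])
    results = sp.recommendations(seed_genres=["pop"], limit=limit, **params)
    return [(t["name"], t["artists"][0]["name"]) for t in results["tracks"]]
```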