Abstract:
"Music is a universal language that positively impacts human mental health. However, due to
individual differences in music taste, it can be challenging to provide personalized music
recommendations. This is where the DL model CNN-based application comes in, incorporating Face Emotion Recognition and a self-questionnaire to generate accurate and effective playlists based on the user's emotions and situation.
The results show that the model recognized user emotions accurately and generated relevant music recommendations, and the application received positive feedback from technical experts. Future work includes expanding the dataset, adding more emotion classes for the model to recognize, and integrating the application with additional music streaming services. In conclusion, this study offers an approach to music recommendation that prioritizes the user's emotional state, providing a more personalized and enjoyable listening experience.
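To make the described pipeline concrete, the following is a minimal sketch of how a CNN emotion classifier could be combined with a self-questionnaire to pick a playlist. It assumes a Keras-style CNN over 48x48 grayscale face crops and a questionnaire that yields a probability over the same emotion labels; all names, the label set, and the blending weight are illustrative assumptions, not the paper's actual implementation.

    # Illustrative sketch only: architecture, labels, and fusion rule are assumed.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    EMOTIONS = ["angry", "happy", "neutral", "sad"]  # assumed label set

    def build_emotion_cnn(num_classes=len(EMOTIONS)):
        """Small CNN mapping a 48x48 grayscale face crop to emotion probabilities."""
        return keras.Sequential([
            layers.Input(shape=(48, 48, 1)),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])

    def recommend(face_crop, questionnaire_probs, model, playlists, alpha=0.7):
        """Blend the CNN posterior with the questionnaire distribution
        and return the playlist for the combined dominant emotion."""
        face_probs = model.predict(face_crop[np.newaxis, ..., np.newaxis], verbose=0)[0]
        combined = alpha * face_probs + (1 - alpha) * np.asarray(questionnaire_probs)
        return playlists[EMOTIONS[int(np.argmax(combined))]]

    if __name__ == "__main__":
        model = build_emotion_cnn()  # untrained here; real weights would be loaded
        playlists = {e: f"{e}_playlist" for e in EMOTIONS}
        face = np.random.rand(48, 48).astype("float32")  # stand-in for a detected face
        survey = [0.1, 0.6, 0.2, 0.1]                    # stand-in questionnaire output
        print(recommend(face, survey, model, playlists))

The weighted blend is one simple way to reconcile the two signals the abstract mentions; the paper itself may fuse them differently (e.g., using the questionnaire only to break ties or to select a sub-genre).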