Abstract:
"Users are experiencing a challenge in finding music that corresponds to their emotional
preferences due to the exponential expansion in digital music consumption brought about by the quick improvement of technology. Current music recommendation systems overlook the dynamic and crucial emotional aspect of music consumption, instead depending solely on user preferences and listening history. This restriction causes a rift between consumers and their music collections, which frequently results in irritating and less enjoyable music. The ""Emotion Reader"" project proposes a hybrid approach to music recommendation by integrating Facial Emotion Recognition (FER) and Speech Emotion Recognition (SER) techniques. This system analyzes the user's emotional state through facial expressions and speech, enabling personalized music recommendations that align with their current mood. By combining these two emotion-detection methods, the project aims to enhance the user experience by selecting music that best resonates with their emotional needs, providing a more dynamic and intuitive music recommendation system."
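As a rough illustration of the hybrid idea described above, the sketch below fuses a facial-expression emotion distribution with a speech-based one by weighted averaging and maps the resulting mood to a playlist. The emotion labels, the fusion weight, and the playlist mapping are illustrative assumptions only; the abstract does not specify the actual fusion strategy or model architecture used in the project.

from typing import Dict

# Illustrative emotion labels; the project's actual label set may differ.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

# Hypothetical mapping from detected mood to a playlist name.
MOOD_PLAYLISTS = {
    "happy": "Upbeat Hits",
    "sad": "Mellow Acoustics",
    "angry": "Calming Instrumentals",
    "neutral": "Daily Mix",
}

def fuse_emotions(fer_probs: Dict[str, float],
                  ser_probs: Dict[str, float],
                  fer_weight: float = 0.6) -> str:
    """Combine FER and SER probability distributions by weighted averaging
    (an assumed late-fusion scheme) and return the most likely emotion."""
    fused = {
        e: fer_weight * fer_probs.get(e, 0.0)
           + (1.0 - fer_weight) * ser_probs.get(e, 0.0)
        for e in EMOTIONS
    }
    return max(fused, key=fused.get)

def recommend_playlist(fer_probs: Dict[str, float],
                       ser_probs: Dict[str, float]) -> str:
    """Map the fused emotion to a playlist for recommendation."""
    mood = fuse_emotions(fer_probs, ser_probs)
    return MOOD_PLAYLISTS[mood]

if __name__ == "__main__":
    # Example outputs from hypothetical FER and SER classifiers.
    fer = {"happy": 0.7, "sad": 0.1, "angry": 0.1, "neutral": 0.1}
    ser = {"happy": 0.4, "sad": 0.3, "angry": 0.1, "neutral": 0.2}
    print(recommend_playlist(fer, ser))  # prints "Upbeat Hits"

In practice, the two recognizers could also be fused at the feature level or by a learned classifier; the simple weighted vote here is only meant to make the hybrid pipeline concrete.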