Multimodal Emotion Recognition and Emoji Generation System for Mental Health Monitoring and Emotion Awareness.

dc.contributor.author Senarathna, Janangi
dc.date.accessioned 2025-06-30T08:20:10Z
dc.date.available 2025-06-30T08:20:10Z
dc.date.issued 2024
dc.identifier.citation Senarathna, Janangi (2024) Multimodal Emotion Recognition and Emoji Generation System for Mental Health Monitoring and Emotion Awareness. MSc Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 20220107
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/2777
dc.description.abstract "In the realm of mental health monitoring and emotion awareness, the integration of multimodal emotion recognition and emoji generation systems emerges as a promising avenue. This research endeavours to develop a comprehensive framework that harnesses various sensory modalities, including facial expressions, speech patterns, and physiological signals, to accurately detect and analyse individuals' emotional states. The proposed system not only recognizes emotions but also seeks to bridge the communication gap through a novel emoji generation component. This feature aims to enhance user engagement and self-expression, providing a dynamic and user-friendly interface through which individuals can convey their emotional experiences. The project's significance lies in its potential to transform mental health monitoring by offering a non-intrusive, continuous assessment tool. By leveraging machine learning algorithms and deep neural networks, the system aims to achieve high accuracy in emotion recognition across diverse contexts. Furthermore, the emoji generation module empowers users to articulate their emotions visually, fostering a deeper understanding of their emotional well-being. According to preliminary test findings, the facial emotion recognition model achieves an accuracy of 0.8373, meaning it correctly classifies 83.73% of instances. The speech emotion recognition model records an accuracy of 97%, the percentage of correct predictions across all instances, and a precision of 0.97 for the chosen classes, the ratio of true positive predictions to all positive predictions." en_US
dc.language.iso en en_US
dc.subject Computer vision en_US
dc.subject Natural Language Processing en_US
dc.subject Machine Learning en_US
dc.title Multimodal Emotion Recognition and Emoji Generation System for Mental Health Monitoring and Emotion Awareness. en_US
dc.type Thesis en_US

