| dc.description.abstract |
Emotion recognition based on physiological signals faces two major challenges: substantial inter-individual variability and performance degradation when models are transferred from controlled laboratory environments to real-world conditions. These limitations restrict the general applicability of emotion-aware systems, as they often require extensive personalized calibration and exhibit inconsistent behavior across different settings.
To address these challenges, this research proposes a dual-framework approach that combines personalization with domain adaptation. In the first framework, a personalized emotion recognition model achieves 88.16% accuracy on the WESAD dataset using only five calibration samples per emotion class, demonstrating effective adaptation to individual differences with minimal data requirements. The second framework investigates cross-dataset generalization through model transfer between the WESAD and K-EmoCon datasets. Results show a 40% reduction in the domain gap, with valence classification reaching 80.27% accuracy and outperforming arousal recognition, which achieved 63.14%. The stronger transferability observed for valence indicates its potential for building more robust and adaptable affective computing systems.
By integrating personalized modeling with domain adaptation, this study advances emotion recognition techniques that balance sensitivity to individual differences with robustness across contexts, thereby contributing toward more reliable and deployable emotion-aware technologies. |
en_US |