D-DiTECT Multimodal Depression Analysis System

dc.contributor.author Silva, Sonal
dc.date.accessioned 2026-03-24T06:05:10Z
dc.date.available 2026-03-24T06:05:10Z
dc.date.issued 2025
dc.identifier.citation Silva, Sonal (2025) D-DiTECT Multimodal Depression Analysis System. BSc. Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 20200113
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/3043
dc.description.abstract Depression is a significant public health issue with serious personal and societal consequences. Traditional methods for diagnosing depression, such as self-reported questionnaires and clinical interviews, often rely on subjective assessments, which may not accurately reflect an individual's emotional state. This study proposes a multimodal approach to depression detection, integrating audio, visual, and textual data to improve diagnostic accuracy and facilitate early intervention. To address this challenge, the author developed a Multimodal Depression Detection System (D-DiTECT) that combines facial expression analysis using a Convolutional Neural Network (CNN), audio processing through a Mel-Frequency Cepstral Coefficient (MFCC) feature pipeline with a Long Short-Term Memory (LSTM) classifier, and text sentiment analysis with a BERT-CNN model. Each modality was designed to capture a distinct aspect of depressive symptoms, providing a more comprehensive evaluation than any single channel alone. en_US
dc.language.iso en en_US
dc.subject Multimodal Depression Detection en_US
dc.subject Facial Expression Analysis en_US
dc.subject Audio Analysis en_US
dc.title D-DiTECT Multimodal Depression Analysis System en_US
dc.type Thesis en_US
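The abstract names three per-modality models (CNN for facial expressions, MFCC-LSTM for audio, BERT-CNN for text) but does not state how their outputs are combined. A common choice for such systems is late (decision-level) fusion, sketched below; the weights, score values, and function name are illustrative assumptions, not details from the dissertation.

```python
import numpy as np

def fuse_scores(visual: float, audio: float, text: float,
                weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Weighted late fusion of per-modality depression probabilities.

    Each input is a probability in [0, 1] from one modality's model
    (CNN, MFCC-LSTM, BERT-CNN). The weights are hypothetical: a real
    system would tune them on a validation set.
    """
    scores = np.array([visual, audio, text])
    w = np.array(weights)
    return float(scores @ w / w.sum())

# Hypothetical model outputs for one subject
fused = fuse_scores(visual=0.72, audio=0.55, text=0.63)
label = "depressed" if fused >= 0.5 else "not depressed"
```

Decision-level fusion keeps the three models independent, so a missing modality (e.g. no usable audio) can be handled by re-normalising the remaining weights rather than retraining a joint model.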

