Digital Repository

Explainable AI Model for Text-Based Fake News Detection: A User-Centric Approach to Enhance Trust and Transparency


dc.contributor.author De Saram, Shamal
dc.date.accessioned 2026-03-10T04:23:56Z
dc.date.available 2026-03-10T04:23:56Z
dc.date.issued 2025
dc.identifier.citation De Saram, Shamal (2025) Explainable AI Model for Text-Based Fake News Detection: A User-Centric Approach to Enhance Trust and Transparency. MSc Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 20200040
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/2887
dc.description.abstract The widespread dissemination of fake news across digital platforms challenges the reliability of information sources. While current automated detection systems effectively flag fake content, their lack of transparency undermines trust and adoption, as users often view them as black-box operations. To address this, an Explainable Artificial Intelligence (XAI) model is needed that not only identifies fake news accurately but also provides clear, understandable reasons for its classifications, thereby enhancing trust and transparency. The proposed model integrates a machine learning classifier with XAI methods such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to deliver accurate classifications alongside human-readable explanations. User studies will assess the model's impact on trust, understanding, and perceived transparency, ensuring a user-centered approach. In the tests conducted for this Proof of Concept (PoC), the model achieved an 88% accuracy rate in classifying news as either fake or real, confirming the suitability of Bidirectional Encoder Representations from Transformers (BERT) for text-based fake news classification. This PoC was concerned only with classification; future work will seek to improve the model's accuracy while constructing the XAI component in a user-centric manner. This approach will enhance trust, user satisfaction, and transparency, meeting the urgent need to address fake news effectively and responsibly. en_US
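The XAI methods named in the abstract (SHAP and LIME) share one model-agnostic idea: perturb the input text and measure how the classifier's output changes, then attribute the change to individual words. The sketch below illustrates that idea with a leave-one-out attribution over a deliberately simplistic stand-in classifier; the cue-word lexicon and scoring rule are illustrative assumptions, not the dissertation's BERT model or the actual SHAP/LIME algorithms.

```python
def fake_score(words):
    # Stand-in classifier: fraction of words drawn from a hypothetical
    # "sensationalism" lexicon. A real system would use a trained model
    # such as the fine-tuned BERT classifier described in the abstract.
    cues = {"shocking", "miracle", "secret", "exposed"}
    return sum(w.lower() in cues for w in words) / max(len(words), 1)

def explain(words):
    # Leave-one-out attribution: drop each distinct word and record how
    # much the fake-news score falls (positive = pushed toward "fake").
    base = fake_score(words)
    return {w: base - fake_score([x for x in words if x != w]) for w in words}

headline = "Shocking secret miracle cure exposed".split()
attributions = explain(headline)
```

Here the cue words receive positive attributions while a neutral word like "cure" receives a negative one, which is the kind of per-word evidence a human-readable explanation would surface.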
dc.language.iso en en_US
dc.subject Explainable AI en_US
dc.subject User Centric Approach en_US
dc.subject Explainable Artificial Intelligence (XAI) en_US
dc.title Explainable AI Model for Text-Based Fake News Detection: A User-Centric Approach to Enhance Trust and Transparency en_US
dc.type Thesis en_US

