Abstract:
"The widespread of fake news across digital platforms challenges the reliability of information
sources. While current automated detection systems effectively flag fake content, their lack of
transparency undermines trust and adoption, as users often view them as black-box operations.
To address this, an Explainable Artificial Intelligence (XAI) model is needed to not only
identify fake news accurately but also provide clear, understandable reasons for its
classifications, thereby enhancing trust and transparency.
The proposed model integrates a machine learning classifier with XAI methods, such as SHapley
Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME),
to deliver accurate classifications alongside human-readable explanations. User studies will
assess the model's impact on trust, understanding, and perceived transparency, ensuring a
user-centered approach.
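To make this integration concrete, the following is a minimal sketch of how a text classifier can be paired with LIME to produce per-article, word-level explanations; the toy corpus, the TF-IDF/logistic-regression stand-in model, and the class names are illustrative assumptions rather than the pipeline described in this work, and SHAP can be applied to the same classifier in an analogous way.

```python
# Illustrative sketch only: a simple stand-in classifier explained with LIME.
# The corpus, labels, and model choice are placeholders, not the proposed model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy corpus standing in for a labelled fake/real news dataset (0 = real, 1 = fake)
texts = [
    "Scientists publish peer-reviewed study on vaccine efficacy",
    "Shocking! Celebrity reveals miracle cure doctors hate",
    "Central bank announces quarterly interest rate decision",
    "You won't believe this one weird trick to double your money",
]
labels = [0, 1, 0, 1]

# Train a simple text classifier
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# Generate a human-readable, per-article explanation with LIME
explainer = LimeTextExplainer(class_names=["real", "fake"])
explanation = explainer.explain_instance(
    "Miracle cure revealed by anonymous insider, doctors shocked",
    pipeline.predict_proba,
    num_features=5,
)
for word, weight in explanation.as_list():
    print(f"{word:>12s}  {weight:+.3f}")  # words pushing the prediction toward 'fake' vs 'real'
```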
In the tests conducted for this Proof of Concept (PoC), the model achieved 88%
accuracy in classifying news as either fake or real, confirming the capability of Bidirectional
Encoder Representations from Transformers (BERT) for text-based fake news classification.
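As an illustration of the classification step, a minimal sketch of BERT-based binary classification with the Hugging Face transformers library is shown below; the checkpoint name, label mapping, and example headline are assumptions, and the classification head would first need to be fine-tuned on a labelled fake/real news dataset before its probabilities are meaningful.

```python
# Minimal sketch of BERT binary classification (illustrative, not the exact PoC setup).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumed mapping: 0 = real, 1 = fake
)
model.eval()  # in practice, fine-tune on labelled news data before inference

headline = "Government confirms new infrastructure spending plan"
inputs = tokenizer(headline, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()
print(f"real: {probs[0]:.3f}  fake: {probs[1]:.3f}")
```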
This PoC was concerned only with classification; future work will seek to improve the model's
accuracy while constructing the XAI component in a user-centric manner. This approach will enhance trust,
user satisfaction, and transparency, meeting the urgent need to address fake news effectively and responsibly.