Exploring Content and Social Connections of Fake News with Explainable Text and Graph Learning
By: Vítor N. Lourenço, Aline Paes, and Tillman Weyde
Potential Business Impact:
Helps curb the spread of fake news by explaining why content is flagged as false.
The global spread of misinformation and concerns about content trustworthiness have driven the development of automated fact-checking systems. Since false information often exploits social media dynamics such as "likes" and user networks to amplify its reach, effective solutions must go beyond content analysis to incorporate these factors. Moreover, simply labelling content as false can be ineffective or even reinforce biases such as automation bias and confirmation bias. This paper proposes an explainable framework that combines content, social media, and graph-based features to enhance fact-checking. It integrates a misinformation classifier with explainability techniques to deliver complete and interpretable insights that support classification decisions. Experiments demonstrate that multimodal information improves performance over single modalities, with evaluations conducted on datasets in English, Spanish, and Portuguese. Additionally, the framework's explanations were assessed for interpretability, trustworthiness, and robustness using a novel evaluation protocol, showing that the framework effectively generates human-understandable justifications for its predictions.
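The abstract describes fusing content and social-graph modalities in one classifier and then explaining its decisions. As a rough, hypothetical illustration of that fusion idea (this is not the authors' implementation; the toy posts, follower graph, and feature choices below are invented assumptions), a minimal Python sketch might concatenate TF-IDF text features with per-user graph features and surface model coefficients as a crude stand-in for the paper's explainability component:

    # Minimal sketch, assuming a toy dataset of (text, sharing user, label).
    import numpy as np
    import networkx as nx
    from scipy.sparse import hstack, csr_matrix
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical posts: 1 = misinformation, 0 = reliable.
    posts = [
        ("Miracle cure discovered, doctors furious", "u1", 1),
        ("City council approves new transit budget", "u2", 0),
        ("Celebrity secretly replaced by clone", "u1", 1),
        ("University publishes peer-reviewed climate study", "u3", 0),
    ]
    texts = [p[0] for p in posts]
    users = [p[1] for p in posts]
    labels = np.array([p[2] for p in posts])

    # Content modality: TF-IDF bag-of-words features.
    vectorizer = TfidfVectorizer()
    X_text = vectorizer.fit_transform(texts)

    # Social modality: an invented follower graph; each post gets the
    # sharing user's degree and PageRank score as graph features.
    G = nx.Graph([("u1", "u2"), ("u2", "u3"), ("u1", "u4")])
    pagerank = nx.pagerank(G)
    X_graph = csr_matrix([[G.degree(u), pagerank[u]] for u in users])

    # Fuse modalities by simple feature concatenation, then classify.
    X = hstack([X_text, X_graph])
    clf = LogisticRegression().fit(X, labels)

    # Explanation stand-in: rank features by absolute coefficient weight,
    # so a prediction can be traced to specific words and graph properties.
    names = list(vectorizer.get_feature_names_out()) + ["degree", "pagerank"]
    for i in np.argsort(-np.abs(clf.coef_[0]))[:5]:
        print(f"{names[i]}: {clf.coef_[0][i]:+.3f}")

The paper's actual classifier and explainability techniques are more sophisticated; this sketch only illustrates why concatenating content and graph features can outperform a single modality and how explanations can reference both.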
Similar Papers
Designing Effective AI Explanations for Misinformation Detection: A Comparative Study of Content, Social, and Combined Explanations
Human-Computer Interaction
Helps people spot fake news by comparing different kinds of AI explanations.