More Than Just Warnings: Exploring the Ways of Communicating Credibility Assessment on Social Media
By: Huiyun Tang, Björn Rohles, Yuwei Chuai and more
Potential Business Impact:
Helps people spot fake news better online.
Reducing the spread of misinformation is challenging. AI-based fact-verification systems offer a promising solution to the high cost and slow pace of traditional fact-checking, but how to effectively communicate their results to users remains unsolved. Warning labels may seem like an easy solution, yet they fail to account for fuzzy misinformation that is not entirely fake. In addition, users' limited attention spans and the social media information environment should be taken into account when designing the presentation. Our online experiment (n = 537) investigates the impact of source and granularity on users' perception of information veracity and of the system's usefulness and trustworthiness. Findings show that fine-grained indicators foster more nuanced opinions, greater information awareness, and a stronger intention to use fact-checking systems. Source differences had minimal impact on opinions and perceptions, except for perceived informativeness. Qualitative findings suggest the proposed indicators promote critical thinking. We discuss implications for designing concise, user-friendly AI fact-checking feedback.
Similar Papers
Exploring Content and Social Connections of Fake News with Explainable Text and Graph Learning
Social and Information Networks
Finds fake news by looking at what people share.
Evaluation Metrics for Misinformation Warning Interventions: Challenges and Prospects
Human-Computer Interaction
Helps stop fake news by checking how warnings work.
Labeling Synthetic Content: User Perceptions of Warning Label Designs for AI-generated Content on Social Media
Human-Computer Interaction
Helps people spot fake online videos and pictures.