COMMUNITYNOTES: A Dataset for Exploring the Helpfulness of Fact-Checking Explanations
By: Rui Xing, Preslav Nakov, Timothy Baldwin, and others
Potential Business Impact:
Helps users spot fake news faster.
Fact-checking on major platforms, such as X, Meta, and TikTok, is shifting from expert-driven verification to a community-based setup in which users contribute explanatory notes to clarify why a post might be misleading. An important challenge, largely underexplored in prior research, is determining whether an explanation is helpful for understanding real-world claims, and why. In practice, most community notes remain unpublished due to slow community annotation, and the reasons for helpfulness lack clear definitions. To bridge these gaps, we introduce the task of predicting both the helpfulness of explanatory notes and the reasons behind it. We present COMMUNITYNOTES, a large-scale multilingual dataset of 104k posts with user-provided notes and helpfulness labels. We further propose a framework that automatically generates and improves reason definitions via automatic prompt optimization and integrates them into prediction. Our experiments show that the optimized definitions improve both helpfulness and reason prediction. Finally, we show that helpfulness information is beneficial for existing fact-checking systems.
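To make the task concrete, here is a minimal sketch of what definition-augmented helpfulness prediction might look like: reason definitions are embedded into a classification prompt and a model labels the note. The paper's framework optimizes these definitions automatically via prompt optimization; the definitions, function names, and labels below are illustrative placeholders, not taken from the paper, and the model call is stubbed.

```python
# Hypothetical sketch of helpfulness prediction with reason definitions
# injected into the prompt. All names and definitions are illustrative.

HELPFULNESS_LABELS = ["HELPFUL", "NOT_HELPFUL"]

# Hand-written placeholder definitions; the paper's framework would
# generate and refine these automatically via prompt optimization.
REASON_DEFINITIONS = {
    "cites_sources": "The note links to trustworthy, verifiable sources.",
    "addresses_claim": "The note directly addresses the claim in the post.",
    "neutral_language": "The note avoids speculation and charged wording.",
}


def build_prompt(post: str, note: str) -> str:
    """Assemble a classification prompt that embeds the reason definitions."""
    defs = "\n".join(f"- {name}: {text}"
                     for name, text in REASON_DEFINITIONS.items())
    return (
        "You are rating a community fact-checking note.\n"
        f"Reason definitions:\n{defs}\n\n"
        f"Post: {post}\nNote: {note}\n\n"
        "Answer with a helpfulness label (HELPFUL or NOT_HELPFUL) "
        "and list the reason names that apply."
    )


def predict(post: str, note: str) -> str:
    # Stand-in for a real LLM call; swap in your model client here.
    prompt = build_prompt(post, note)
    return f"[model output for prompt of {len(prompt)} chars]"


if __name__ == "__main__":
    print(predict(
        post="Miracle cure X eliminates all viruses overnight.",
        note="No clinical evidence supports this; see the linked review.",
    ))
```

The design point the abstract suggests is that prediction quality depends on how the reasons are defined, which is why the definitions are treated as an optimizable part of the prompt rather than fixed labels.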
Similar Papers
References to unbiased sources increase the helpfulness of community fact-checks
Social and Information Networks
Makes online fact-checks more helpful with links.
Commenotes: Synthesizing Organic Comments to Support Community-Based Fact-Checking
Human-Computer Interaction
Makes online posts get checked faster.
Community Notes are Vulnerable to Rater Bias and Manipulation
Social and Information Networks
Fixes social media notes to stop bias.