Community Notes are Vulnerable to Rater Bias and Manipulation
By: Bao Tran Truong, Siqi Wu, Alessandro Flammini, et al.
Potential Business Impact:
Shows where crowdsourced notes can be biased or gamed, so platforms can fix them.
Social media platforms increasingly rely on crowdsourced moderation systems like Community Notes to combat misinformation at scale. However, these systems face challenges from rater bias and potential manipulation, which may undermine their effectiveness. Here we systematically evaluate the Community Notes algorithm using simulated data that models realistic rater and note behaviors, quantifying error rates in publishing helpful versus unhelpful notes. We find that the algorithm suppresses a substantial fraction of genuinely helpful notes and is highly sensitive to rater biases, including polarization and in-group preferences. Moreover, a small minority (5–20%) of bad raters can strategically suppress targeted helpful notes, effectively censoring reliable information. These findings suggest that while community-driven moderation may offer scalability, its vulnerability to bias and manipulation raises concerns about reliability and trustworthiness, highlighting the need for improved mechanisms to safeguard the integrity of crowdsourced fact-checking.
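The suppression effect the abstract describes can be illustrated with a deliberately simplified toy model. The sketch below is not the actual Community Notes ranking algorithm (which uses matrix factorization with rater and note factors to reward cross-viewpoint agreement); it assumes only a hypothetical threshold rule, where a note is published once the fraction of "helpful" ratings reaches a cutoff, to show how a coordinated minority of bad raters can flip a helpful note's outcome:

```python
def published(helpful_votes: int, total_votes: int, threshold: float = 0.7) -> bool:
    """Toy publication rule (a simplification, not the real bridging
    algorithm): publish when the helpful fraction meets the threshold.
    The 0.7 threshold is an illustrative assumption."""
    return helpful_votes / total_votes >= threshold

# 80 honest raters, 64 of whom find the note helpful (80% agreement).
print(published(64, 80))   # True: the note is published

# Now 20 coordinated bad raters (20% of the crowd) join and all vote
# "not helpful". The honest votes are unchanged, but the fraction drops
# to 0.64, below the threshold.
print(published(64, 100))  # False: the same note is suppressed
```

Even in this crude model, a 20% adversarial minority is enough to censor a note that four out of five honest raters found helpful, which is consistent in spirit with the 5–20% range the paper reports for the real algorithm.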
Similar Papers
Threats to the sustainability of Community Notes on X
Social and Information Networks
Helps online notes get more people to agree.
Timeliness, Consensus, and Composition of the Crowd: Community Notes on X
Social and Information Networks
Helps social media spot fake news better.