In Crowd Veritas: Leveraging Human Intelligence To Fight Misinformation
By: Michael Soprano
Potential Business Impact:
Helps computers spot fake news by learning from people's judgments.
The spread of online misinformation poses serious threats to democratic societies. Traditionally, expert fact-checkers verify the truthfulness of information through investigative processes, but the volume and immediacy of online content present major scalability challenges. Crowdsourcing offers a promising alternative by leveraging non-expert judgments, though it introduces concerns about bias, accuracy, and interpretability. This thesis investigates how human intelligence can be harnessed to assess the truthfulness of online information, focusing on three areas: misinformation assessment, cognitive biases, and automated fact-checking systems. Through large-scale crowdsourcing experiments and statistical modeling, it identifies key factors influencing human judgments and introduces a model for the joint prediction and explanation of truthfulness. The findings show that non-expert judgments often align with expert assessments, particularly when factors such as timing and experience are taken into account. By deepening our understanding of human judgment and bias in truthfulness assessment, this thesis contributes to the development of more transparent, trustworthy, and interpretable systems for combating misinformation.
Similar Papers
Assessing the Potential of Generative Agents in Crowdsourced Fact-Checking
Computation and Language
Computers check if online stories are true.
Establishing Trust in Crowdsourced Data
Social and Information Networks
Makes online information more trustworthy and fair.
Exploring Content and Social Connections of Fake News with Explainable Text and Graph Learning
Social and Information Networks
Finds fake news by looking at what people share.