Beyond the "Truth": Investigating Election Rumors on Truth Social During the 2024 Election
By: Etienne Casanova, R. Michael Alvarez
Potential Business Impact:
Shows how fake news spreads and why people come to believe it.
Large language models (LLMs) offer unprecedented opportunities for analyzing social phenomena at scale. This paper demonstrates the value of LLMs in psychological measurement by (1) compiling the first large-scale dataset of election rumors on a niche alt-tech platform, (2) developing a multistage Rumor Detection Agent that leverages LLMs for high-precision content classification, and (3) quantifying the psychological dynamics of rumor propagation, specifically the "illusory truth effect" in a naturalistic setting. The Rumor Detection Agent combines (i) a synthetic data-augmented, fine-tuned RoBERTa classifier, (ii) precision keyword filtering, and (iii) a two-pass LLM verification pipeline using GPT-4o mini. The findings reveal that sharing probability rises steadily with each additional exposure, providing large-scale empirical evidence for dose-response belief reinforcement in ideologically homogeneous networks. Simulation results further demonstrate rapid contagion effects: nearly one quarter of users become "infected" within just four propagation iterations. Taken together, these results illustrate how LLMs can transform psychological science by enabling the rigorous measurement of belief dynamics and misinformation spread in massive, real-world datasets.
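The three-stage Rumor Detection Agent lends itself to a concrete sketch. The Python below is a minimal, hypothetical wiring of stages (i)–(iii): the RoBERTa checkpoint path, the keyword list, the "RUMOR" label name, and the verification prompt are all illustrative assumptions, not the authors' implementation; only the model family (fine-tuned RoBERTa) and the verifier (GPT-4o mini) come from the abstract.

```python
# Hypothetical sketch of the multistage pipeline described in the abstract:
# (i) fine-tuned RoBERTa classifier, (ii) precision keyword filter,
# (iii) two-pass LLM verification with GPT-4o mini.
from transformers import pipeline
from openai import OpenAI

# Stage (i): fine-tuned RoBERTa classifier (checkpoint path is hypothetical).
classifier = pipeline("text-classification", model="path/to/finetuned-roberta")

# Stage (ii): precision keyword filter (keyword list is illustrative).
RUMOR_KEYWORDS = {"rigged", "stolen election", "ballot dumps", "dead voters"}

client = OpenAI()

def llm_verify(post: str) -> bool:
    """One LLM pass: ask GPT-4o mini whether the post spreads an election rumor."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer YES or NO: does this post assert or spread "
                        "an election rumor?"},
            {"role": "user", "content": post},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

def detect_rumor(post: str) -> bool:
    # Stage (i): cheap classifier screens out obvious non-rumors.
    pred = classifier(post)[0]
    if pred["label"] != "RUMOR" or pred["score"] < 0.5:
        return False
    # Stage (ii): keyword filter keeps precision high.
    if not any(kw in post.lower() for kw in RUMOR_KEYWORDS):
        return False
    # Stage (iii): two LLM passes, both of which must agree
    # (the paper's exact two-pass design may differ).
    return llm_verify(post) and llm_verify(post)
```

Requiring every stage to fire trades recall for precision, which matches the abstract's emphasis on high-precision content classification.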
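The dose-response finding, that sharing probability rises steadily with each additional exposure, is the kind of relationship typically estimated with a logistic model of share events on exposure counts. The sketch below uses synthetic data and an assumed coefficient purely to illustrate the shape of such an estimate; it is not the paper's data, method, or parameters.

```python
# Illustrative dose-response estimate: probability of sharing a rumor as a
# function of prior exposures. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
exposures = rng.integers(0, 10, size=n)                 # prior exposures per user
true_p = 1 / (1 + np.exp(-(-2.0 + 0.35 * exposures)))   # assumed ground truth
shared = rng.random(n) < true_p                         # whether the user reshared

model = LogisticRegression().fit(exposures.reshape(-1, 1), shared)
for k in range(5):
    p = model.predict_proba([[k]])[0, 1]
    print(f"P(share | {k} exposures) = {p:.3f}")        # rises with each exposure
```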
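The contagion result can likewise be illustrated with a toy independent-cascade simulation on a synthetic follower graph. Graph size, seed count, and the per-edge transmission probability below are assumptions; the abstract reports only the headline result that nearly a quarter of users are infected within four propagation iterations.

```python
# Toy independent-cascade simulation of rumor contagion, tracking the
# infected fraction per iteration. Parameters are assumptions for illustration.
import random
import networkx as nx

random.seed(42)
G = nx.barabasi_albert_graph(n=5_000, m=3)   # scale-free follower graph (assumed)
P_TRANSMIT = 0.15                            # per-edge share probability (assumed)

infected = {random.randrange(G.number_of_nodes()) for _ in range(25)}  # seed users
frontier = set(infected)
for step in range(1, 5):                     # four propagation iterations
    new = set()
    for u in frontier:
        for v in G.neighbors(u):
            if v not in infected and random.random() < P_TRANSMIT:
                new.add(v)
    infected |= new
    frontier = new
    print(f"iteration {step}: {len(infected) / G.number_of_nodes():.1%} infected")
```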
Similar Papers
Simulating Misinformation Propagation in Social Networks using Large Language Models
Social and Information Networks
Shows how fake news spreads and how to stop it.
Simulating Rumor Spreading in Social Networks using LLM Agents
Social and Information Networks
Simulates how fake news spreads online.
How LLMs Fail to Support Fact-Checking
Computation and Language
Shows that LLMs can help spot fake news but still fall short.