Simulating Misinformation Vulnerabilities With Agent Personas
By: David Farr, Lynnette Hui Xian Ng, Stephen Prochaska, and more
Potential Business Impact:
Lets computers simulate how different kinds of people judge fake news.
Disinformation campaigns can distort public perception and destabilize institutions. Understanding how different populations respond to information is crucial for designing effective interventions, yet real-world experimentation is impractical and ethically fraught. To address this, we develop an agent-based simulation that uses Large Language Models (LLMs) to model responses to misinformation. We construct agent personas spanning five professions and three mental schemas, and evaluate their reactions to news headlines. Our findings show that LLM-generated agents align closely with ground-truth labels and human predictions, supporting their use as proxies for studying information responses. We also find that mental schemas, more than professional background, influence how agents interpret misinformation. This work validates the use of LLMs as agents in an agent-based model of an information network for analyzing trust, polarization, and susceptibility to deceptive content in complex social systems.
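The core setup described above, persona-conditioned LLM agents rating news headlines as real or fake, can be sketched in Python. Everything below is an illustrative assumption rather than the authors' implementation: the specific professions and schemas, the prompt wording, the REAL/FAKE answer format, and the call_llm stand-in, which you would replace with an actual chat-completion client.

    from dataclasses import dataclass
    from itertools import product

    @dataclass(frozen=True)
    class Persona:
        profession: str  # one of the five professions in the persona grid
        schema: str      # one of the three mental schemas

    # Hypothetical values; the paper's exact professions and schemas may differ.
    PROFESSIONS = ["teacher", "nurse", "engineer", "farmer", "journalist"]
    SCHEMAS = ["trusting", "skeptical", "conspiratorial"]

    def build_prompt(persona: Persona, headline: str) -> str:
        """Condition the model on a persona, then ask for a one-word verdict."""
        return (
            f"You are a {persona.profession} with a {persona.schema} outlook "
            "toward news media.\n"
            f'Headline: "{headline}"\n'
            "Is this headline real or fake? Answer with one word: REAL or FAKE."
        )

    def call_llm(prompt: str) -> str:
        """Stand-in for any chat-completion client (OpenAI, Anthropic, a local
        model). Swap in a real API call here."""
        raise NotImplementedError("plug an LLM client in here")

    def agent_verdict(persona: Persona, headline: str) -> str:
        """One agent's belief judgment, normalized to REAL or FAKE."""
        answer = call_llm(build_prompt(persona, headline)).strip().upper()
        return "FAKE" if "FAKE" in answer else "REAL"

    if __name__ == "__main__":
        headline = "Scientists confirm chocolate cures the common cold"
        # Print the prompts for the full 5 x 3 persona grid; this demo runs
        # without an API key because call_llm is never invoked here.
        for prof, schema in product(PROFESSIONS, SCHEMAS):
            prompt = build_prompt(Persona(prof, schema), headline)
            print(f"[{prof} / {schema}]", prompt[:70], "...")

Validating such agents then amounts to comparing the grid of persona verdicts against ground-truth headline labels and human predictions, which is the kind of agreement analysis the paper reports.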
Similar Papers
Simulating Misinformation Propagation in Social Networks using Large Language Models
Social and Information Networks
Shows how fake news spreads and how to stop it.
Simulating Online Social Media Conversations on Controversial Topics Using AI Agents Calibrated on Real-World Data
Social and Information Networks
Computers can now pretend to be people online.
Persuasion at Play: Understanding Misinformation Dynamics in Demographic-Aware Human-LLM Interactions
Computation and Language
Helps stop fake news from spreading online.