Simulating Misinformation Vulnerabilities With Agent Personas

Published: October 31, 2025 | arXiv ID: 2511.04697v1

By: David Farr, Lynnette Hui Xian Ng, Stephen Prochaska, et al.

Potential Business Impact:

Uses AI agent personas to predict how different groups of people respond to fake news, so interventions can be tested without real-world experiments.

Business Areas:
Simulation Software

Disinformation campaigns can distort public perception and destabilize institutions. Understanding how different populations respond to information is crucial for designing effective interventions, yet real-world experimentation is impractical and ethically challenging. To address this, we develop an agent-based simulation using Large Language Models (LLMs) to model responses to misinformation. We construct agent personas spanning five professions and three mental schemas, and evaluate their reactions to news headlines. Our findings show that LLM-generated agents align closely with ground-truth labels and human predictions, supporting their use as proxies for studying information responses. We also find that mental schemas, more than professional background, influence how agents interpret misinformation. This work validates the use of LLMs as agents in an agent-based model of an information network for analyzing trust, polarization, and susceptibility to deceptive content in complex social systems.
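
The abstract describes conditioning LLM agents on persona attributes (profession crossed with mental schema) and eliciting their reactions to headlines. The sketch below illustrates one way such a setup could look; the specific professions, schemas, prompt wording, and model choice are illustrative assumptions (the paper's actual prompts and persona definitions are not given in this summary), and an OpenAI-compatible chat API is assumed.

```python
import itertools
from openai import OpenAI  # assumes an OpenAI-compatible chat API is available

# Illustrative persona axes; the paper uses five professions and three
# mental schemas, but the specific values below are hypothetical.
PROFESSIONS = ["nurse", "journalist", "software engineer", "teacher", "farmer"]
SCHEMAS = [
    "trusting of institutions",
    "skeptical of institutions",
    "inclined toward conspiratorial thinking",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def persona_prompt(profession: str, schema: str) -> str:
    """Build a system prompt that conditions the agent on one persona."""
    return (
        f"You are a {profession} who is {schema}. "
        "When shown a news headline, reply with a single word: "
        "'believe' or 'disbelieve'."
    )


def agent_reaction(profession: str, schema: str, headline: str) -> str:
    """Query the LLM agent for its reaction to a single headline."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": persona_prompt(profession, schema)},
            {"role": "user", "content": headline},
        ],
        temperature=0.0,
    )
    return response.choices[0].message.content.strip().lower()


if __name__ == "__main__":
    headline = "Scientists confirm drinking seawater cures the common cold."
    # Sweep the full persona grid for one headline and print each reaction.
    for profession, schema in itertools.product(PROFESSIONS, SCHEMAS):
        label = agent_reaction(profession, schema, headline)
        print(f"{profession:20s} | {schema:40s} | {label}")
```

In a study like the one described, each agent's response would then be compared against ground-truth headline labels and human predictions to assess alignment, and aggregated by profession and schema to see which attribute drives interpretation.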

Page Count
12 pages

Category
Computer Science:
Social and Information Networks