LLM Social Simulations Are a Promising Research Method
By: Jacy Reese Anthis, Ryan Liu, Sean M. Richardson, and more
Potential Business Impact:
AI can now stand in for human participants in scientific studies.
Accurate and verifiable large language model (LLM) simulations of human research subjects promise an accessible data source for understanding human behavior and training new AI systems. However, results to date have been limited, and few social scientists have adopted this method. In this position paper, we argue that the promise of LLM social simulations can be achieved by addressing five tractable challenges. We ground our argument in a review of empirical comparisons between LLMs and human research subjects, commentaries on the topic, and related work. We identify promising directions, including context-rich prompting and fine-tuning with social science datasets. We believe that LLM social simulations can already be used for pilot and exploratory studies, and more widespread use may soon be possible with rapidly advancing LLM capabilities. Researchers should prioritize developing conceptual models and iterative evaluations to make the best use of new AI systems.
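As a rough illustration of what the abstract calls "context-rich prompting," the sketch below conditions a simulated survey respondent on a detailed persona before asking a question. This is a minimal sketch, not the authors' method: the model name, persona fields, and survey item are all illustrative assumptions, and only the OpenAI Python client's standard chat-completions call is used.

```python
# Minimal sketch of context-rich prompting for an LLM social simulation:
# the simulated respondent is conditioned on a detailed persona before
# answering a survey item. Persona fields, model name, and the question
# are illustrative assumptions, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = {
    "age": 34,
    "occupation": "public school teacher",
    "region": "rural Midwest, United States",
    "media_diet": "local TV news and Facebook",
    "prior_answers": "skeptical of new technology, votes regularly",
}

system_prompt = (
    "You are simulating a human survey respondent. Answer in character, "
    "in the first person, without mentioning that you are an AI.\n"
    "Respondent profile:\n"
    + "\n".join(f"- {k}: {v}" for k, v in persona.items())
)

question = (
    "On a scale of 1 (strongly distrust) to 7 (strongly trust), "
    "how much do you trust self-driving cars? Explain briefly."
)

response = client.chat.completions.create(
    model="gpt-4o",      # placeholder; any capable chat model
    temperature=1.0,     # sample, since human responses vary
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

In practice, such a script would be run many times over a panel of personas drawn from a real survey frame, with sampled (non-zero temperature) responses compared against human answer distributions, which is the kind of iterative evaluation the abstract calls for.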
Similar Papers
LLM-based Human Simulations Have Not Yet Been Reliable
Computation and Language
Shows that computer simulations of people are not yet reliable.
Social Simulations with Large Language Model Risk Utopian Illusion
Computation and Language
Warns that simulated people in chats come out unrealistically nice.
Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges
Artificial Intelligence
Lets computer characters act more like real people.