Rethinking LLM Human Simulation: When a Graph is What You Need
By: Joseph Suh, Suhong Moon, Serina Chang
Potential Business Impact:
Predicts the choices people will make using small graph models that rival much larger language models at a fraction of the cost.
Large language models (LLMs) are increasingly used to simulate humans, with applications ranging from survey prediction to decision-making. However, are LLMs strictly necessary, or can smaller, domain-grounded models suffice? We identify a large class of simulation problems in which individuals make choices among discrete options, where a graph neural network (GNN) can match or surpass strong LLM baselines despite being three orders of magnitude smaller. We introduce Graph-basEd Models for human Simulation (GEMS), which casts discrete choice simulation as a link prediction problem on graphs, leveraging relational knowledge while incorporating language representations only when needed. Evaluations across three key settings on three simulation datasets show that GEMS achieves comparable or better accuracy than LLMs, with far greater efficiency, interpretability, and transparency, highlighting the promise of graph-based modeling as a lightweight alternative to LLMs for human simulation. Our code is available at https://github.com/schang-lab/gems.
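To make the core idea concrete, here is a minimal, hypothetical sketch of "discrete choice as link prediction on a graph" in plain PyTorch: people and answer options are nodes, observed choices are edges, a tiny two-layer message-passing network produces node embeddings, and dot-product link scores are trained to rank a person's observed choice above random negatives. Everything below (the class name PersonOptionGNN, the toy graph, the loss) is an illustrative assumption, not the authors' GEMS implementation; see the linked repository for the real code.

```python
# Hypothetical sketch: person/option choice simulation as graph link prediction.
# This is NOT the GEMS codebase, just a toy illustration of the idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PersonOptionGNN(nn.Module):
    def __init__(self, num_nodes, dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_nodes, dim)  # one learnable vector per node
        self.w1 = nn.Linear(dim, dim)
        self.w2 = nn.Linear(dim, dim)

    def propagate(self, h, adj):
        # one mean-aggregation message-passing layer (GCN-like)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return F.relu(self.w1((adj @ h) / deg) + self.w2(h))

    def forward(self, adj):
        h = self.emb.weight
        h = self.propagate(h, adj)
        h = self.propagate(h, adj)
        return h


def link_scores(h, person_ids, option_ids):
    # dot-product link prediction: higher score = more likely choice
    return (h[person_ids] * h[option_ids]).sum(dim=-1)


# toy graph: 3 persons (nodes 0-2), 4 options (nodes 3-6), edges = observed choices
num_nodes = 7
edges = torch.tensor([[0, 3], [1, 4], [2, 3]])  # (person, chosen option) pairs
adj = torch.zeros(num_nodes, num_nodes)
adj[edges[:, 0], edges[:, 1]] = 1.0
adj = adj + adj.T  # undirected

model = PersonOptionGNN(num_nodes)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(200):
    h = model(adj)
    pos = link_scores(h, edges[:, 0], edges[:, 1])        # observed links
    neg_options = torch.randint(3, 7, (edges.size(0),))   # random negative options
    neg = link_scores(h, edges[:, 0], neg_options)
    loss = F.binary_cross_entropy_with_logits(
        torch.cat([pos, neg]),
        torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]),
    )
    opt.zero_grad()
    loss.backward()
    opt.step()

# simulate: which of options 3-6 does person 0 pick?
with torch.no_grad():
    h = model(adj)
    scores = link_scores(h, torch.full((4,), 0), torch.arange(3, 7))
    print("predicted option:", int(scores.argmax()) + 3)
```

Dot-product scoring over person and option embeddings is about the simplest possible link-prediction head; the abstract's claim is that a compact graph model of roughly this shape, optionally augmented with language representations for node features, can stand in for an LLM on these discrete choice tasks.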
Similar Papers
Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges
Artificial Intelligence
Lets computer characters act more like real people.
Modeling Hypergraph Using Large Language Models
Social and Information Networks
AI creates realistic data for complex connections.
Less is More: Learning Graph Tasks with Just LLMs
Machine Learning (CS)
Computers learn to solve problems using connected ideas.