Score: 2

Rethinking LLM Human Simulation: When a Graph is What You Need

Published: November 3, 2025 | arXiv ID: 2511.02135v1

By: Joseph Suh, Suhong Moon, Serina Chang

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Enables efficient simulation of human discrete choices (e.g., survey responses and decisions) using models far smaller than LLMs.

Business Areas:
Simulation Software

Large language models (LLMs) are increasingly used to simulate humans, with applications ranging from survey prediction to decision-making. However, are LLMs strictly necessary, or can smaller, domain-grounded models suffice? We identify a large class of simulation problems in which individuals make choices among discrete options, where a graph neural network (GNN) can match or surpass strong LLM baselines despite being three orders of magnitude smaller. We introduce Graph-basEd Models for human Simulation (GEMS), which casts discrete choice simulation tasks as a link prediction problem on graphs, leveraging relational knowledge while incorporating language representations only when needed. Evaluations across three key settings on three simulation datasets show that GEMS achieves comparable or better accuracy than LLMs, with far greater efficiency, interpretability, and transparency, highlighting the promise of graph-based modeling as a lightweight alternative to LLMs for human simulation. Our code is available at https://github.com/schang-lab/gems.
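The abstract describes casting discrete-choice simulation as link prediction on a graph connecting individuals to options. The sketch below illustrates that general idea, assuming nothing about the actual GEMS architecture: a bipartite person-option graph, one mean-aggregation message-passing step (a minimal GNN layer), and dot-product link scoring, where each person's predicted choice is their highest-scoring option. All variable names and the aggregation scheme are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 people, 3 options, 8-dim feature vectors.
n_people, n_options, dim = 4, 3, 8
people = rng.normal(size=(n_people, dim))    # person node features
options = rng.normal(size=(n_options, dim))  # option node features

# Observed choices as a bipartite adjacency matrix (1 = person chose option).
adj = np.array([[1, 0, 0],
                [0, 1, 0],
                [1, 0, 0],
                [0, 0, 1]], dtype=float)

def message_pass(people, options, adj):
    """One mean-aggregation step: each node mixes in its neighbors' features."""
    deg_p = np.clip(adj.sum(axis=1, keepdims=True), 1, None)
    deg_o = np.clip(adj.sum(axis=0, keepdims=True).T, 1, None)
    people_out = 0.5 * people + 0.5 * (adj @ options) / deg_p
    options_out = 0.5 * options + 0.5 * (adj.T @ people) / deg_o
    return people_out, options_out

p_emb, o_emb = message_pass(people, options, adj)

# Link prediction: score every (person, option) pair; the simulated
# choice for each person is the argmax over options.
scores = p_emb @ o_emb.T          # shape (n_people, n_options)
predicted = scores.argmax(axis=1)  # one predicted option index per person
print(predicted)
```

In a trained model the embeddings would be learned (and, per the abstract, language representations could be incorporated where needed); here they are random, so the output only demonstrates the mechanics of graph-based choice prediction.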

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/schang-lab/gems

Page Count
33 pages

Category
Computer Science:
Computation and Language