Bias-Adjusted LLM Agents for Human-Like Decision-Making via Behavioral Economics
By: Ayato Kitadai, Yusuke Fukasawa, Nariaki Nishino
Potential Business Impact:
Makes computer minds act more like real people.
Large language models (LLMs) are increasingly used to simulate human decision-making, but their intrinsic biases often diverge from real human behavior, limiting their ability to reflect population-level diversity. We address this challenge with a persona-based approach that leverages individual-level data from behavioral economics experiments to adjust model biases. Applying this method to the ultimatum game, a standard but difficult benchmark for LLMs, we observe improved alignment between simulated and empirical behavior, particularly on the responder side. While further refinement of trait representations is needed, our results demonstrate the promise of persona-conditioned LLMs for simulating human-like decision patterns at scale.
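As a rough illustration of the persona-based approach described in the abstract, the sketch below conditions an LLM prompt on individual-level traits and asks the model to play the ultimatum game responder. The `Persona` fields, the prompt wording, and the `query_llm` stub are assumptions made for this sketch; the paper's actual trait representation and model interface may differ.

```python
from dataclasses import dataclass


@dataclass
class Persona:
    """Individual-level traits from behavioral-economics data (assumed fields)."""
    age: int
    risk_attitude: str     # e.g. "risk averse", "risk neutral", "risk seeking"
    fairness_concern: str  # e.g. "weak", "moderate", "strong"


def build_responder_prompt(persona: Persona, total: int, offer: int) -> str:
    """Condition the model on a persona, then pose the responder's accept/reject choice."""
    return (
        f"You are a {persona.age}-year-old who is {persona.risk_attitude} and has a "
        f"{persona.fairness_concern} concern for fairness.\n"
        f"In an ultimatum game, the proposer splits {total} points and offers you {offer}.\n"
        "If you accept, the split stands; if you reject, both players get nothing.\n"
        "Answer with exactly one word: ACCEPT or REJECT."
    )


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion backend (stubbed for this sketch)."""
    # A real experiment would send `prompt` to an LLM API and parse its reply;
    # the stub simply accepts so the example runs end to end.
    return "ACCEPT"


if __name__ == "__main__":
    # One simulated responder facing a low offer (2 out of 10 points).
    persona = Persona(age=34, risk_attitude="risk averse", fairness_concern="strong")
    decision = query_llm(build_responder_prompt(persona, total=10, offer=2))
    print(decision)
```

In a full simulation, one such persona would be drawn per real participant in the behavioral-economics dataset, and aggregate accept/reject rates across offers would be compared against the empirical distribution.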
Similar Papers
How Far Can LLMs Emulate Human Behavior?: A Strategic Analysis via the Buy-and-Sell Negotiation Game
Artificial Intelligence
Teaches computers to negotiate like people.
Computational Basis of LLM's Decision Making in Social Simulation
Artificial Intelligence
Changes AI's fairness by adjusting its "personality."
From Single to Societal: Analyzing Persona-Induced Bias in Multi-Agent Interactions
Multiagent Systems
AI agents show unfair bias based on fake personalities.