Bias-Adjusted LLM Agents for Human-Like Decision-Making via Behavioral Economics

Published: August 26, 2025 | arXiv ID: 2508.18600v1

By: Ayato Kitadai, Yusuke Fukasawa, Nariaki Nishino

Potential Business Impact:

Tunes LLM agents to make decisions more like real people, enabling more realistic large-scale simulations of human behavior.

Business Areas:
Simulation Software

Large language models (LLMs) are increasingly used to simulate human decision-making, but their intrinsic biases often diverge from real human behavior, limiting their ability to reflect population-level diversity. We address this challenge with a persona-based approach that leverages individual-level behavioral data from behavioral economics to adjust model biases. Applying this method to the ultimatum game, a standard but difficult benchmark for LLMs, we observe improved alignment between simulated and empirical behavior, particularly on the responder side. While further refinement of trait representations is needed, our results demonstrate the promise of persona-conditioned LLMs for simulating human-like decision patterns at scale.
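The abstract does not spell out implementation details, but a minimal sketch of what persona-conditioned prompting for the ultimatum game responder could look like follows. The `Persona` fields, the prompt wording, and the `query_llm` stub are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: persona-conditioned prompting for the ultimatum game
# responder. The Persona fields, prompt wording, and query_llm stub are
# illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass


@dataclass
class Persona:
    """Individual-level traits, e.g. estimated from behavioral-economics data."""
    age: int
    fairness_concern: str  # e.g. "strong", "moderate", "weak"
    risk_attitude: str     # e.g. "risk-averse", "risk-neutral"


def build_responder_prompt(persona: Persona, total: int, offer: int) -> str:
    """Condition the model on a persona, then pose the responder decision."""
    return (
        f"You are a {persona.age}-year-old person with a "
        f"{persona.fairness_concern} concern for fairness and a "
        f"{persona.risk_attitude} attitude toward risk.\n"
        f"In an ultimatum game, a proposer splits ${total} and offers you "
        f"${offer}. If you accept, the split stands; if you reject, "
        f"both players get nothing.\n"
        f"Answer with exactly one word: ACCEPT or REJECT."
    )


def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client; plug in your own."""
    raise NotImplementedError


def simulate_responder(persona: Persona, total: int, offer: int) -> bool:
    """Return True if the persona-conditioned model accepts the offer."""
    reply = query_llm(build_responder_prompt(persona, total, offer))
    return reply.strip().upper().startswith("ACCEPT")


if __name__ == "__main__":
    # Printing the prompt runs without an LLM backend.
    p = Persona(age=34, fairness_concern="strong", risk_attitude="risk-averse")
    print(build_responder_prompt(p, total=10, offer=2))
```

Sweeping such personas across a range of offers and comparing the resulting acceptance rates against empirical distributions is the kind of population-level alignment check the abstract describes.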

Page Count
8 pages

Category
Computer Science:
Computer Science and Game Theory (cs.GT)