LLM Agents Display Human Biases but Exhibit Distinct Learning Patterns
By: Idan Horowitz, Ori Plonsky
Potential Business Impact:
Computers show human-like biases on average but learn differently than people do, so using them to simulate or predict human choices can mislead.
We investigate the choice patterns of Large Language Models (LLMs) in Decisions from Experience tasks, which involve repeated choice and learning from feedback, and compare their behavior to that of human participants. We find that, in the aggregate, LLMs appear to display behavioral biases similar to humans: both exhibit underweighting of rare events and correlation effects. However, more nuanced analyses of the choice patterns reveal that this happens for very different reasons. LLMs exhibit strong recency biases, unlike humans, who appear to respond in more sophisticated ways. While these different processes may lead to similar behavior on average, choice patterns contingent on recent events differ vastly between the two groups. Specifically, phenomena such as "surprise triggers change" and the "wavy recency effect of rare events" are robustly observed in humans but entirely absent in LLMs. Our findings provide insights into the limitations of using LLMs to simulate and predict human behavior in learning environments and highlight the need for refined analyses of their behavior when investigating whether they replicate human decision-making tendencies.
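To make the mechanism concrete, below is a minimal sketch (not the paper's actual task or model) of how a strongly recency-biased agent can appear, in the aggregate, to underweight a rare event. The payoff scheme, the recency_weight parameter, and the full-feedback assumption are all illustrative choices, not taken from the paper.

```python
import random

def recency_agent_run(trials=100, recency_weight=0.8, seed=0):
    """Hypothetical repeated-choice task: a safe option paying 0 vs. a
    risky option paying +1 with prob 0.9 and -10 with prob 0.1 (EV = -0.1).
    The agent keeps an exponentially recency-weighted payoff estimate."""
    rng = random.Random(seed)
    est_risky = 0.0  # recency-weighted estimate of the risky option
    est_safe = 0.0   # the safe option always pays 0
    risky_choices = 0
    for _ in range(trials):
        if est_risky >= est_safe:
            risky_choices += 1
        # Assume full feedback: the risky payoff is observed every trial,
        # and recent outcomes dominate the estimate.
        payoff = 1 if rng.random() < 0.9 else -10
        est_risky = recency_weight * payoff + (1 - recency_weight) * est_risky
    return risky_choices / trials

rate = sum(recency_agent_run(seed=s) for s in range(500)) / 500
print(f"risky-choice rate: {rate:.2f} (an EV maximizer would never choose risky)")
```

Because the rare loss is usually absent from recent outcomes, this agent picks the negative-EV risky option on most trials, mimicking underweighting of rare events at the aggregate level; yet, trial by trial, it switches away immediately after every loss, which is exactly the kind of recency-driven contingent pattern the paper distinguishes from human behavior.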
Similar Papers
Can Generative AI agents behave like humans? Evidence from laboratory market experiments
General Economics
Computers can now act like people in money games.
Bias-Adjusted LLM Agents for Human-Like Decision-Making via Behavioral Economics
CS and Game Theory
Makes computer minds act more like real people.
Large Language Models are Near-Optimal Decision-Makers with a Non-Human Learning Behavior
Artificial Intelligence
AI makes better choices than people in tests.