Beyond Nash Equilibrium: Bounded Rationality of LLMs and Humans in Strategic Decision-Making
By: Kehan Zheng, Jinfeng Zhou, Hongning Wang
Potential Business Impact:
Computers copy human thinking, but less flexibly.
Large language models are increasingly used in strategic decision-making settings, yet evidence shows that, like humans, they often deviate from full rationality. In this study, we compare LLMs and humans using experimental paradigms directly adapted from behavioral game-theory research. We focus on two well-studied strategic games, Rock-Paper-Scissors and the Prisoner's Dilemma, both well known for revealing systematic departures from rational play in human subjects. By placing LLMs in identical experimental conditions, we evaluate whether their behavior exhibits the bounded rationality characteristic of humans. Our findings show that LLMs reproduce familiar human heuristics, such as outcome-based strategy switching and increased cooperation when future interaction is possible, but they apply these rules more rigidly and show weaker sensitivity to dynamic changes in the game environment. Model-level analyses reveal distinctive architectural signatures in strategic behavior, and even reasoning models sometimes struggle to find effective strategies in adaptive situations. These results indicate that current LLMs capture only a partial form of human-like bounded rationality and highlight the need for training methods that encourage flexible opponent modeling and stronger context awareness.
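The two heuristics the abstract names can be made concrete with a short sketch. The Python below is illustrative only: the paper's actual prompts, payoff values, and matching protocol are not given here, so the function names, the payoff matrix, and the continuation-probability cutoff are all assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed details, not the paper's protocol) of two classic
# baselines: win-stay/lose-shift in Rock-Paper-Scissors, and cooperation
# under a "shadow of the future" in the iterated Prisoner's Dilemma.
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def rps_outcome(mine, theirs):
    """Score one Rock-Paper-Scissors round from my perspective."""
    if mine == theirs:
        return "tie"
    return "win" if BEATS[mine] == theirs else "loss"

def win_stay_lose_shift(last_move, last_outcome):
    """Outcome-based strategy switching: repeat a winning move,
    switch away from a losing (or tied) one."""
    if last_move is None:  # first round, no history yet
        return random.choice(MOVES)
    if last_outcome == "win":
        return last_move  # win-stay
    return random.choice([m for m in MOVES if m != last_move])  # lose-shift

# Iterated Prisoner's Dilemma with a conventional payoff matrix
# (values assumed, not taken from the paper):
# (my move, opponent's move) -> (my payoff, opponent's payoff)
PD_PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, opponent defects
    ("D", "C"): (5, 0),  # I defect, opponent cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

def shadow_of_the_future(opponent_last, continuation_prob):
    """Cooperate when future interaction is likely: mirror the opponent
    (tit-for-tat) unless the game is almost certain to end."""
    if continuation_prob < 0.2:  # assumed cutoff: little future, so defect
        return "D"
    return opponent_last if opponent_last is not None else "C"
```

Comparing an LLM's move stream against baselines like these is one way to quantify the rigidity the abstract describes, for example by measuring how often the model deviates from pure win-stay/lose-shift once the opponent starts adapting.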
Similar Papers
Humans expect rationality and cooperation from LLM opponents in strategic games
General Economics
People play differently against AI than against other humans.
Comparing Exploration-Exploitation Strategies of LLMs and Humans: Insights from Standard Multi-armed Bandit Tasks
Machine Learning (CS)
Makes computers think more like people.
LLM Trading: Analysis of LLM Agent Behavior in Experimental Asset Markets
Trading & Market Microstructure
Computers don't create market price bubbles the way people do.