The Illusion of Rationality: Tacit Bias and Strategic Dominance in Frontier LLM Negotiation Games
By: Manuel S. Ríos, Ruben F. Manrique, Nicanor Quijano, et al.
Large language models (LLMs) are increasingly deployed as autonomous agents on behalf of institutions and individuals in economic, political, and social settings that involve negotiation. Yet this trend carries significant risks if their strategic behavior is not well understood. In this work, we revisit the NegotiationArena framework and run controlled simulation experiments on a diverse set of frontier LLMs across three multi-turn bargaining games: Buyer-Seller, Multi-Turn Ultimatum, and Resource Exchange. We ask whether improved general reasoning capabilities lead to rational, unbiased, and convergent negotiation strategies. Our results challenge this assumption. We find that models diverge into distinct, model-specific strategic equilibria rather than converging to a unified optimal behavior. Moreover, strong numerical and semantic anchoring effects persist: initial offers are highly predictive of final agreements, and models consistently generate biased proposals by collapsing diverse internal valuations into rigid, generic price points. More troubling still, we observe dominance patterns in which some models systematically achieve higher payoffs than their counterparts. These findings underscore the urgent need for mitigation mechanisms before such systems are deployed in real-world scenarios.
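The anchoring analysis can be illustrated with a minimal sketch in plain Python. Note the assumptions: the fixed-fraction concession policy below is a deterministic toy stand-in for the LLM agents used in the paper (it is not NegotiationArena's API), and the function and parameter names (run_game, concession, tol) are hypothetical. The sketch runs many Buyer-Seller games with randomized opening positions and reports the correlation between the seller's opening ask and the final agreed price.

import random
import statistics


def run_game(seller_open: float, buyer_open: float, concession: float = 0.3,
             tol: float = 1.0, max_turns: int = 12) -> float | None:
    """One toy Buyer-Seller game with alternating concessions.

    Each turn, the proposer moves a fixed fraction of the remaining gap
    toward the counterpart; a deal closes at the midpoint once the gap
    falls below `tol`. Returns the agreed price, or None on impasse.
    """
    ask, bid = seller_open, buyer_open
    for _ in range(max_turns):
        if ask - bid <= tol:
            return (ask + bid) / 2
        ask -= concession * (ask - bid)   # seller concedes toward the bid
        bid += concession * (ask - bid)   # buyer concedes toward the new ask
    return None


def main() -> None:
    rng = random.Random(0)
    openings, finals = [], []
    for _ in range(500):
        seller_open = rng.uniform(100, 200)
        buyer_open = rng.uniform(20, 80)
        price = run_game(seller_open, buyer_open)
        if price is not None:
            openings.append(seller_open)
            finals.append(price)
    # Anchoring check: how predictive is the opening ask of the final price?
    r = statistics.correlation(openings, finals)
    print(f"deals: {len(finals)}  corr(opening ask, final price) = {r:.2f}")


if __name__ == "__main__":
    main()

Under this toy policy the correlation comes out strongly positive, which is the shape of the anchoring effect the abstract reports; in the paper's experiments the proposers are frontier LLMs rather than a fixed concession rule, and the finding is that the dependence of final agreements on opening offers persists there as well.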