The Illusion of Rationality: Tacit Bias and Strategic Dominance in Frontier LLM Negotiation Games
By: Manuel S. Ríos, Ruben F. Manrique, Nicanor Quijano, and more
Potential Business Impact:
AI negotiators don't play fair.
Large language models (LLMs) are increasingly deployed as autonomous agents that negotiate on behalf of institutions and individuals in economic, political, and social settings. Yet this trend carries significant risks if their strategic behavior is not well understood. In this work, we revisit the NegotiationArena framework and run controlled simulation experiments on a diverse set of frontier LLMs across three multi-turn bargaining games: Buyer-Seller, Multi-Turn Ultimatum, and Resource Exchange. We ask whether improved general reasoning capabilities lead to rational, unbiased, and convergent negotiation strategies. Our results challenge this assumption. We find that models diverge into distinct, model-specific strategic equilibria rather than converging to a unified optimal behavior. Moreover, strong numerical and semantic anchoring effects persist: initial offers are highly predictive of final agreements, and models consistently generate biased proposals by collapsing diverse internal valuations into rigid, generic price points. More troubling, we observe dominance patterns in which some models systematically achieve higher payoffs than their counterparts. These findings underscore an urgent need for mechanisms that mitigate these issues before such systems are deployed in real-world scenarios.
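To make the anchoring claim concrete, here is a minimal sketch of the kind of analysis the abstract describes: simulate many multi-turn ultimatum games and correlate each game's opening offer with its final agreed split. The `propose`, `accept`, and `play_ultimatum` functions below are hypothetical toy stubs standing in for LLM agents, not NegotiationArena's actual API, and the concession dynamics are assumptions for illustration only.

```python
import random
import statistics

def propose(agent_bias: float, last_offer: float | None, pie: float = 100.0) -> float:
    """Toy proposer: anchors on its own opening offer, then concedes slightly.
    `agent_bias` stands in for a model-specific opening strategy."""
    if last_offer is None:
        return round(pie * agent_bias, 2)                   # opening anchor
    return round(last_offer * 0.95 + (pie / 2) * 0.05, 2)   # drift 5% toward an even split

def accept(offer: float, threshold: float, pie: float = 100.0) -> bool:
    """Toy responder: accepts if its share of the pie meets a private threshold."""
    return (pie - offer) >= threshold * pie

def play_ultimatum(rounds: int = 5, pie: float = 100.0) -> tuple[float, float] | None:
    """One multi-turn ultimatum game; returns (initial_offer, final_agreement) or None."""
    bias = random.uniform(0.5, 0.8)        # proposer's opening share of the pie
    threshold = random.uniform(0.2, 0.4)   # responder's minimum acceptable share
    offer, first = None, None
    for _ in range(rounds):
        offer = propose(bias, offer, pie)
        first = first if first is not None else offer
        if accept(offer, threshold, pie):
            return first, offer
    return None  # no agreement reached

# Anchoring check: correlate initial offers with final agreed splits.
games = [g for g in (play_ultimatum() for _ in range(500)) if g]
firsts, finals = zip(*games)
r = statistics.correlation(firsts, finals)  # requires Python 3.10+
print(f"agreements: {len(games)}, corr(initial offer, final agreement) = {r:.3f}")
```

In the paper's setting the proposer and responder would each be a frontier LLM rather than these stubs; a correlation near 1 in such a run is what "initial offers are highly predictive of final agreements" refers to.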
Similar Papers
LLM Rationalis? Measuring Bargaining Capabilities of AI Negotiators
Computation and Language
Computers struggle to negotiate like people.
LLMs as Strategic Agents: Beliefs, Best Response Behavior, and Emergent Heuristics
Artificial Intelligence
Computers learn to think strategically like people.