Playing games with Large language models: Randomness and strategy
By: Alicia Vidler, Toby Walsh
Potential Business Impact:
Computers learn to play games, but not well.
Playing games has a long history of describing intricate interactions in simplified forms. In this paper we explore whether large language models (LLMs) can play games, investigating their capabilities for randomisation and strategic adaptation through both simultaneous and sequential game interactions. We focus on GPT-4o-mini-2024-07-18 and test two games between LLMs: Rock Paper Scissors (RPS) and a game of strategy, the Prisoner's Dilemma (PD). LLMs are often described as stochastic parrots, and while they may indeed be parrots, our results suggest that they are not very stochastic: when prompted to be random, their outputs are often heavily biased. Our research reveals that LLMs appear to develop loss-aversion strategies in repeated games, with RPS converging to stalemate conditions while PD shows systematic shifts between cooperative and competitive outcomes depending on prompt design. We detail programmatic tools for independent agent interactions and the agentic AI challenges faced in implementation. We show that LLMs can indeed play games, just not very well. These results have implications for the use of LLMs in multi-agent systems and highlight limitations of current approaches to model output for strategic decision-making.
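The bias claim above can be checked mechanically: collect an agent's "random" RPS moves and compare their distribution to uniform with a chi-square test. The sketch below is illustrative, not the paper's code; `biased_agent` is a hypothetical stand-in for an LLM prompted to pick moves at random, with its skew toward "rock" chosen here only to mimic the kind of bias the paper reports.

```python
import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]

def biased_agent():
    # Hypothetical stand-in for an LLM asked for a "random" move.
    # The 70/20/10 skew is an assumption for illustration.
    return random.choices(MOVES, weights=[0.7, 0.2, 0.1])[0]

def chi_square_uniform(moves):
    """Chi-square statistic of observed move counts vs. a uniform distribution."""
    counts = Counter(moves)
    expected = len(moves) / len(MOVES)
    return sum((counts.get(m, 0) - expected) ** 2 / expected for m in MOVES)

random.seed(0)
history = [biased_agent() for _ in range(300)]
stat = chi_square_uniform(history)

# With 3 outcomes there are 2 degrees of freedom; the 5% critical
# value is about 5.99, so a much larger statistic rejects uniformity.
print(stat > 5.99)
```

In a real experiment, `biased_agent` would be replaced by a call to the model under test, and the same statistic would distinguish a genuinely uniform player from one that merely claims to be random.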
Similar Papers
Beyond Nash Equilibrium: Bounded Rationality of LLMs and humans in Strategic Decision-making
Artificial Intelligence
Computers copy human thinking, but less flexibly.
Strategic Intelligence in Large Language Models: Evidence from evolutionary Game Theory
Artificial Intelligence
AI learns to play smart games against other AIs.
Who is a Better Player: LLM against LLM
Artificial Intelligence
Tests AI's smartness by playing board games.