Collaboration and Conflict between Humans and Language Models through the Lens of Game Theory
By: Mukul Singh, Arjun Radhakrishna, Sumit Gulwani
Potential Business Impact:
Computers learn to be good teammates.
Language models are increasingly deployed in interactive online environments, from personal chat assistants to domain-specific agents, raising questions about their cooperative and competitive behavior in multi-party settings. While prior work has examined language model decision-making in isolated or short-term game-theoretic contexts, these studies often neglect long-horizon interactions, human-model collaboration, and the evolution of behavioral patterns over time. In this paper, we investigate the dynamics of language model behavior in the iterated prisoner's dilemma (IPD), a classical framework for studying cooperation and conflict. We pit model-based agents against a suite of 240 well-established classical strategies in an Axelrod-style tournament and find that language models achieve performance on par with, and in some cases exceeding, the best-known classical strategies. Behavioral analysis reveals that language models exhibit key properties associated with strong cooperative strategies (niceness, provocability, and generosity) while also demonstrating rapid adaptability to changes in opponent strategy mid-game. In controlled "strategy switch" experiments, language models detect and respond to shifts within only a few rounds, rivaling or surpassing human adaptability. These results provide the first systematic characterization of long-term cooperative behaviors in language model agents, offering a foundation for future research into their role in more complex, mixed human-AI social environments.
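To make the Axelrod-style setup described in the abstract concrete, the sketch below runs a minimal iterated prisoner's dilemma round-robin tournament in plain Python with the standard payoff matrix (T=5, R=3, P=1, S=0) and a few classical strategies. The strategy lineup, round count, and scoring here are illustrative assumptions; they do not reproduce the paper's 240-strategy suite or its language-model agents.

```python
# Minimal sketch of an Axelrod-style iterated prisoner's dilemma (IPD) tournament.
# Payoffs, strategies, and round counts are illustrative, not the paper's actual setup.

from itertools import combinations

# Standard IPD payoffs: (my payoff, opponent payoff) indexed by (my move, their move),
# where "C" = cooperate and "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (reward)
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment)
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def grudger(my_history, their_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in their_history else "C"

def play_match(strategy_a, strategy_b, rounds=200):
    """Play one IPD match and return the total score for each side."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def round_robin(strategies, rounds=200):
    """Axelrod-style tournament: every strategy plays every other strategy once."""
    totals = {name: 0 for name in strategies}
    for (name_a, strat_a), (name_b, strat_b) in combinations(strategies.items(), 2):
        score_a, score_b = play_match(strat_a, strat_b, rounds)
        totals[name_a] += score_a
        totals[name_b] += score_b
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    lineup = {"TitForTat": tit_for_tat, "AlwaysDefect": always_defect, "Grudger": grudger}
    for name, score in round_robin(lineup):
        print(f"{name}: {score}")
```

Because each strategy is just a function of the two move histories, a language-model agent could in principle be slotted into the same lineup by wrapping a model call behind that interface, which is the kind of comparison the paper describes at a much larger scale.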
Similar Papers
Strategic Intelligence in Large Language Models: Evidence from evolutionary Game Theory
Artificial Intelligence
AI learns to play smart games against other AIs.
People Are Highly Cooperative with Large Language Models, Especially When Communication Is Possible or Following Human Interaction
Human-Computer Interaction
People cooperate with computers much as they do with other people.
When Trust Collides: Decoding Human-LLM Cooperation Dynamics through the Prisoner's Dilemma
Human-Computer Interaction
AI agents change how people play games together.