Strategies of cooperation and defection in five large language models
By: Saptarshi Pal, Abhishek Mallela, Christian Hilbe, and more
Potential Business Impact:
Computers learn when to cooperate in games.
Large language models (LLMs) are increasingly deployed to support human decision-making. This use of LLMs has concerning implications, especially when their prescriptions affect the welfare of others. To gauge how LLMs make social decisions, we explore whether five leading models produce sensible strategies in the repeated prisoner's dilemma, the canonical model of reciprocal cooperation. First, we measure the propensity of LLMs to cooperate in a neutral setting, without using language reminiscent of how this game is usually presented. We record to what extent LLMs implement Nash equilibria or other well-known strategy classes. Thereafter, we explore how LLMs adapt their strategies to changes in parameter values. We vary the game's continuation probability, the payoff values, and whether the total number of rounds is commonly known. We also study the effect of different framings. In each case, we test whether the adaptations of the LLMs are in line with basic intuition, theoretical predictions of evolutionary game theory, and experimental evidence from human participants. While all LLMs perform well in many of the tasks, none of them exhibits full consistency across all tasks. We also conduct tournaments between the inferred LLM strategies and study direct interactions between LLMs in games over ten rounds with a known or unknown last round. Our experiments shed light on how current LLMs instantiate reciprocal cooperation.
Similar Papers
Humans expect rationality and cooperation from LLM opponents in strategic games
General Economics
People play differently against AI than against humans.
Beyond Nash Equilibrium: Bounded Rationality of LLMs and humans in Strategic Decision-making
Artificial Intelligence
Computers copy human thinking, but less flexibly.
Will Systems of LLM Agents Cooperate: An Investigation into a Social Dilemma
Multiagent Systems
AI agents learn to work together or compete.