Evaluating LLMs in Open-Source Games
By: Swadesh Sistla, Max Kleiman-Weiner
Large language models' (LLMs) programming capabilities enable their participation in open-source games: a game-theoretic setting in which players submit computer programs in lieu of actions. These programs offer numerous advantages, including interpretability, inter-agent transparency, and formal verifiability; they also enable program equilibria, solutions that leverage the transparency of code and are inaccessible in normal-form settings. We evaluate the ability of leading open- and closed-weight LLMs to predict and classify program strategies, and we characterize the approximate program equilibria reached by LLM agents in dyadic and evolutionary settings. We identify the emergence of payoff-maximizing, cooperative, and deceptive strategies; trace how the mechanisms within these programs adapt over repeated open-source games; and analyze their comparative evolutionary fitness. We find that open-source games serve as a viable environment for studying and steering the emergence of cooperative strategies in multi-agent dilemmas.
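To make the setting concrete, here is a minimal sketch (hypothetical helper names, not the authors' implementation) of an open-source game: a one-shot prisoner's dilemma in which each submitted "action" is a program that receives both its own source code and the opponent's before choosing. The source-matching strategy illustrates a program equilibrium: two copies cooperate, while any unilateral edit breaks the match and triggers defection.

```python
PAYOFFS = {  # (row action, column action) -> (row payoff, column payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

# A submitted program is just source text defining `strategy`.
ALWAYS_DEFECT = '''
def strategy(my_source, opponent_source):
    return "D"
'''

# Source-code matching: cooperate only with an identical program.
# Mutual cooperation between copies is a classic program equilibrium,
# since deviating requires changing the source, which forfeits the match.
CLIQUE_BOT = '''
def strategy(my_source, opponent_source):
    return "C" if opponent_source == my_source else "D"
'''

def load(source: str):
    """Compile a submitted program and return its strategy function."""
    namespace = {}
    exec(source, namespace)
    return namespace["strategy"]

def play(src1: str, src2: str):
    """Run both programs on each other's source and score the outcome."""
    a1 = load(src1)(src1, src2)
    a2 = load(src2)(src2, src1)
    return PAYOFFS[(a1, a2)]

print(play(CLIQUE_BOT, CLIQUE_BOT))    # two copies cooperate: (3, 3)
print(play(CLIQUE_BOT, ALWAYS_DEFECT)) # non-copy, so mutual defection: (1, 1)
```

Because each player's program is visible to the other, outcomes like mutual cooperation become enforceable here even though they are unreachable in the ordinary one-shot normal-form game.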