Evolution of Cooperation in LLM-Agent Societies: A Preliminary Study Using Different Punishment Strategies
By: Kavindu Warnakulasuriya, Prabhash Dissanayake, Navindu De Silva, and more
Potential Business Impact:
AI agents learn to work together like people.
The evolution of cooperation has been extensively studied using abstract mathematical models and simulations. Recent advances in Large Language Models (LLMs) and the rise of LLM agents have demonstrated their ability to perform social reasoning, providing an opportunity to test the emergence of norms in more realistic agent-based simulations with human-like reasoning expressed in natural language. In this research, we investigate whether the cooperation dynamics of Boyd and Richerson's abstract mathematical model persist in a more realistic LLM-agent simulation of the diner's dilemma. Our findings indicate that agents follow the strategies defined in Boyd and Richerson's model and that explicit punishment mechanisms drive norm emergence, reinforcing cooperative behaviour even when the agent strategy configuration varies. Our results suggest that LLM-based multi-agent system (MAS) simulations can indeed replicate the evolution of cooperation predicted by traditional mathematical models. Moreover, our simulations extend beyond those models by integrating natural-language-driven reasoning and a pairwise imitation method for strategy adoption, making them a more realistic testbed for cooperative behaviour in MASs.
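The abstract names the key mechanisms (a diner's dilemma stage game, punishment strategies, and pairwise imitation for strategy adoption) without stating the rules themselves. The sketch below is a minimal, purely numeric illustration of such a setup, assuming a Boyd-and-Richerson-style strategy set (cooperator, defector, punisher) and a Fermi-rule imitation update. All payoff constants and parameter names here are hypothetical, and the paper's agents additionally reason in natural language via an LLM rather than following fixed numeric rules.

```python
import math
import random

# Hypothetical payoff constants; the paper's actual values are not given here.
EXPENSIVE_UTILITY = 8.0   # enjoyment of the expensive dish
CHEAP_UTILITY = 5.0       # enjoyment of the cheap dish
EXPENSIVE_COST = 10.0     # price of the expensive dish
CHEAP_COST = 4.0          # price of the cheap dish
PUNISH_FINE = 3.0         # fine imposed on each defector by each punisher
PUNISH_COST = 1.0         # cost a punisher pays per defector sanctioned

STRATEGIES = ("cooperator", "defector", "punisher")

def play_round(strategies):
    """One diner's dilemma round: everyone orders, the bill is split
    evenly, then punishers sanction defectors."""
    n = len(strategies)
    # Defection = ordering the expensive dish while the bill is shared.
    orders = ["expensive" if s == "defector" else "cheap" for s in strategies]
    bill = sum(EXPENSIVE_COST if o == "expensive" else CHEAP_COST for o in orders)
    share = bill / n
    payoffs = [
        (EXPENSIVE_UTILITY if o == "expensive" else CHEAP_UTILITY) - share
        for o in orders
    ]
    defectors = [i for i, s in enumerate(strategies) if s == "defector"]
    punishers = [i for i, s in enumerate(strategies) if s == "punisher"]
    for d in defectors:
        payoffs[d] -= PUNISH_FINE * len(punishers)
    for p in punishers:
        payoffs[p] -= PUNISH_COST * len(defectors)
    return payoffs

def pairwise_imitation(strategies, payoffs, beta=1.0):
    """Fermi-rule pairwise imitation: a random agent copies a random
    peer with probability increasing in the payoff difference."""
    i, j = random.sample(range(len(strategies)), 2)
    prob = 1.0 / (1.0 + math.exp(-beta * (payoffs[j] - payoffs[i])))
    if random.random() < prob:
        strategies[i] = strategies[j]

def simulate(n_agents=12, n_rounds=200, seed=0):
    random.seed(seed)
    strategies = [random.choice(STRATEGIES) for _ in range(n_agents)]
    for _ in range(n_rounds):
        payoffs = play_round(strategies)
        pairwise_imitation(strategies, payoffs)
    return strategies

if __name__ == "__main__":
    final = simulate()
    print({s: final.count(s) for s in STRATEGIES})
```

Under this kind of update, an agent is more likely to adopt a peer's strategy the more that peer out-earns it, which is one standard way to operationalise "pairwise imitation" when the adoption probability is not specified in the source.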
Similar Papers
The Role of Social Learning and Collective Norm Formation in Fostering Cooperation in LLM Multi-Agent Systems
Multiagent Systems
Teaches AI to share and follow rules.
An LLM-based Agent Simulation Approach to Study Moral Evolution
Multiagent Systems
Shows how kindness helped people survive long ago.
NetworkGames: Simulating Cooperation in Network Games with Personality-driven LLM Agents
Physics and Society
Helps computers learn how people act together.