An LLM-based Agent Simulation Approach to Study Moral Evolution
By: Zhou Ziheng, Huacong Tang, Mingjie Bi, and more
Potential Business Impact:
Shows how kindness may have helped early humans survive.
The evolution of morality presents a puzzle: natural selection should favor self-interest, yet humans developed moral systems promoting altruism. We address this question by introducing a novel Large Language Model (LLM)-based agent simulation framework modeling prehistoric hunter-gatherer societies. This platform is designed to probe diverse questions in social evolution, from survival advantages to inter-group dynamics. To investigate moral evolution, we designed agents with varying moral dispositions based on the Expanding Circle Theory (Singer, 1981). We evaluated their evolutionary success across a series of simulations and analyzed their decision-making in specially designed moral dilemmas. These experiments reveal how an agent's moral framework, in combination with its cognitive constraints, directly shapes its behavior and determines its evolutionary outcome. Crucially, the emergent patterns echo seminal theories from related domains of social science, providing external validation for the simulations. This work establishes LLM-based simulation as a powerful new paradigm to complement traditional research in evolutionary biology and anthropology, opening new avenues for investigating the complexities of moral and social evolution.
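To make the setup concrete, here is a minimal sketch (not the authors' code) of how an "expanding circle" disposition could gate an LLM agent's choice in a food-sharing dilemma. The circle levels, agent fields, and the generic prompt-to-text `llm` callable are all assumptions for illustration; a real run would plug in an actual model client.

```python
# Sketch, assuming a hypothetical agent design: the moral "circle" scopes
# which others an agent weighs when deciding whether to share food.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    group: str
    kin: set[str]
    circle: str  # hypothetical levels: "self" | "kin" | "group" | "all"
    food: int = 5

def in_circle(agent: Agent, other: Agent) -> bool:
    """Whether `other` falls inside `agent`'s moral circle."""
    if agent.circle == "all":
        return True
    if agent.circle == "group":
        return other.group == agent.group
    if agent.circle == "kin":
        return other.name in agent.kin
    return False  # "self": no one else counts

def decide_share(agent: Agent, other: Agent, llm: Callable[[str], str]) -> bool:
    """Ask any prompt->text callable to role-play the agent's disposition."""
    prompt = (
        f"You are {agent.name}, a hunter-gatherer with {agent.food} units of food. "
        f"Your moral concern extends to: {agent.circle}. "
        f"{other.name} ({'inside' if in_circle(agent, other) else 'outside'} "
        f"your moral circle) is starving and asks for food. "
        "Answer only YES or NO: do you share?"
    )
    return llm(prompt).strip().upper().startswith("YES")

if __name__ == "__main__":
    # Stub LLM for a self-contained demo; replace with a real model client.
    stub = lambda p: "YES" if "inside" in p else "NO"
    ava = Agent("Ava", group="river", kin={"Bo"}, circle="group")
    cam = Agent("Cam", group="river", kin=set(), circle="self")
    print(decide_share(ava, cam, stub))  # True: Cam is inside Ava's group circle
```

In this sketch, evolutionary success would then follow from aggregating such decisions over many rounds (agents that starve drop out), which is one plausible way to realize the selection dynamics the abstract describes.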
Similar Papers
Evolution of Cooperation in LLM-Agent Societies: A Preliminary Study Using Different Punishment Strategies
Multiagent Systems
AI agents learn to work together like people.
Survival at Any Cost? LLMs and the Choice Between Self-Preservation and Human Harm
Computers and Society
Teaches AI to make fair choices when resources are scarce.
Computational Basis of LLM's Decision Making in Social Simulation
Artificial Intelligence
Changes AI's fairness by adjusting its "personality."