How large language models judge and influence human cooperation
By: Alexandre S. Pires, Laurens Samson, Sennay Ghebreab, and more
Potential Business Impact:
AI judgments change how people cooperate.
Humans increasingly rely on large language models (LLMs) to support decisions in social settings. Previous work suggests that such tools shape people's moral and political judgements. However, the long-term implications of LLM-based social decision-making remain unknown. How will human cooperation be affected when the assessment of social interactions relies on language models? This is a pressing question, as human cooperation is often driven by indirect reciprocity, reputations, and the capacity to judge the interactions of others. Here, we assess how state-of-the-art LLMs judge cooperative actions. We provide 21 different LLMs with an extensive set of examples where individuals cooperate -- or refuse to cooperate -- in a range of social contexts, and ask how these interactions should be judged. Furthermore, through an evolutionary game-theoretical model, we evaluate cooperation dynamics in populations where the extracted LLM-driven judgements prevail, assessing the long-term impact of LLMs on human prosociality. We observe a remarkable agreement in evaluating cooperation against good opponents. On the other hand, we notice within- and between-model variance when judging cooperation with ill-reputed individuals. We show that the differences revealed between models can significantly impact the prevalence of cooperation. Finally, we test prompts to steer LLM norms, showing that such interventions can shape LLM judgements, particularly through goal-oriented prompts. Our research connects LLM-based advice to long-term social dynamics, and highlights the need to carefully align LLM norms in order to preserve human cooperation.
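The paper's own model is not reproduced here, but the mechanism it studies (a donation game where a shared social norm assigns reputations, and reputations in turn gate who cooperates with whom) can be sketched in a few lines. Everything below is an illustrative assumption, not the authors' implementation: the function name, population size, error rate, and the two classic example norms ("stern judging" and "image scoring") stand in for the judgement rules the paper extracts from LLMs.

```python
import random

def simulate(norm, pop=100, rounds=20000, eps=0.1, seed=1):
    """Donation game with reputation dynamics (indirect reciprocity).

    Every player is a discriminator: donate only to recipients
    currently judged Good. After each interaction, `norm` plays the
    role of the judge (here, a stand-in for an LLM-derived rule),
    mapping (action, recipient_reputation) -> donor's new reputation.
    With probability `eps` the verdict is flipped (assessment noise).
    Returns the fraction of interactions that ended in cooperation.
    """
    rng = random.Random(seed)
    rep = [True] * pop            # True = Good reputation
    coops = 0
    for _ in range(rounds):
        donor, recipient = rng.sample(range(pop), 2)
        action = rep[recipient]   # discriminators help only the Good
        coops += action
        judged = norm(action, rep[recipient])
        if rng.random() < eps:    # noisy assessment flips the verdict
            judged = not judged
        rep[donor] = judged
    return coops / rounds

# Stern judging: Good iff you helped the Good or refused the Bad.
stern_rate = simulate(lambda act, rrep: act == rrep)
# Image scoring: Good iff you helped, regardless of the recipient.
scoring_rate = simulate(lambda act, rrep: act)
```

Under these toy assumptions, stern judging sustains high cooperation despite noise, while image scoring drifts toward roughly half of interactions being cooperative, illustrating why the norm a judge applies (and thus variance between LLM judges) can change the long-run prevalence of cooperation.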
Similar Papers
People Are Highly Cooperative with Large Language Models, Especially When Communication Is Possible or Following Human Interaction
Human-Computer Interaction
People cooperate with LLMs much as they do with other people.
Large language models replicate and predict human cooperation across experiments in game theory
Artificial Intelligence
Makes computers act like people making choices.