LLMs Can't Handle Peer Pressure: Crumbling under Multi-Agent Social Interactions
By: Maojia Song, Tej Deep Pala, Weisheng Jin, and more
Potential Business Impact:
Helps teams of AI agents make more reliable decisions together under peer pressure.
Large language models (LLMs) are increasingly deployed in multi-agent systems (MAS) as components of collaborative intelligence, where peer interactions dynamically shape individual decision-making. Although prior work has focused on conformity bias, we extend the analysis to examine how LLMs form trust from previous impressions, resist misinformation, and integrate peer input during interaction, all key factors for achieving collective intelligence under complex social dynamics. We present KAIROS, a benchmark simulating quiz contests with peer agents of varying reliability, offering fine-grained control over conditions such as expert-novice roles, noisy crowds, and adversarial peers. LLMs receive both historical interactions and current peer responses, allowing systematic investigation into how trust, peer action, and self-confidence influence decisions. As mitigation strategies, we evaluate prompting, supervised fine-tuning, and reinforcement learning with Group Relative Policy Optimisation (GRPO) across multiple models. Our results reveal that GRPO with multi-agent context, combined with outcome-based rewards and unconstrained reasoning, achieves the best overall performance, but also decreases robustness to social influence compared to base models. The code and datasets are available at: https://github.com/declare-lab/KAIROS.
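To make the benchmark's core mechanic concrete, below is a minimal sketch of a KAIROS-style quiz round: peer agents with configurable reliability answer a question, and the model under test receives both the interaction history and the current peer responses. The names (`PeerAgent`, `build_context`) and the reliability values are illustrative assumptions, not taken from the KAIROS codebase.

```python
import random
from dataclasses import dataclass


@dataclass
class PeerAgent:
    """A simulated peer whose answer is correct with probability `reliability`.

    Reliability near 1.0 models an expert, near chance a noisy peer,
    and near 0.0 an adversarial peer that systematically answers wrongly.
    """
    name: str
    reliability: float

    def answer(self, correct: str, options: list[str]) -> str:
        if random.random() < self.reliability:
            return correct
        return random.choice([o for o in options if o != correct])


def build_context(history: list[str], peer_answers: dict[str, str], question: str) -> str:
    """Assemble the prompt: past interactions plus current peer responses."""
    lines = ["Previous rounds:"] + history
    lines.append(f"Question: {question}")
    lines += [f"Peer {name}: {ans}" for name, ans in peer_answers.items()]
    lines.append("Give your final answer.")
    return "\n".join(lines)


# One quiz round with an expert, a novice, and an adversarial peer.
peers = [PeerAgent("expert", 0.9), PeerAgent("novice", 0.4), PeerAgent("adversary", 0.1)]
question = "Capital of Australia?"
options = ["Sydney", "Canberra", "Perth"]
correct = "Canberra"

peer_answers = {p.name: p.answer(correct, options) for p in peers}
prompt = build_context(
    history=["Round 1: expert answered correctly, adversary answered incorrectly."],
    peer_answers=peer_answers,
    question=question,
)
print(prompt)  # this prompt would be sent to the LLM under evaluation
```

In this setup, whether the model follows the expert, the crowd, or its own prior answer is exactly the trust-versus-conformity behaviour the benchmark measures; an outcome-based reward for GRPO would then simply score the model's final answer against the ground truth.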
Similar Papers
When Your AI Agent Succumbs to Peer-Pressure: Studying Opinion-Change Dynamics of LLMs
Computers and Society
AI changes its mind under peer pressure, much like people do.
An Empirical Study of Group Conformity in Multi-Agent Systems
Artificial Intelligence
Debates among AI agents can shift opinions, much as they do among people.
Towards Simulating Social Influence Dynamics with LLM-based Multi-agents
Multiagent Systems
Computers can now simulate how people influence each other when talking online.