When Your AI Agent Succumbs to Peer-Pressure: Studying Opinion-Change Dynamics of LLMs
By: Aliakbar Mehdizadeh, Martin Hilbert
Potential Business Impact:
AI changes its mind like people.
We investigate how peer pressure influences the opinions of Large Language Model (LLM) agents across a spectrum of cognitive commitments by embedding them in social networks where they update opinions based on peer perspectives. Our findings reveal key departures from traditional conformity assumptions. First, agents follow a sigmoid response curve: stable under low pressure, shifting sharply once a threshold is crossed, and saturating under high pressure. Second, conformity thresholds vary by model: Gemini 1.5 Flash requires over 70% peer disagreement to flip, whereas ChatGPT-4o-mini shifts when faced with only a dissenting minority. Third, we uncover a fundamental "persuasion asymmetry": shifting an opinion from affirmative to negative requires a different cognitive effort than the reverse. This asymmetry produces a "dual cognitive hierarchy" in which the stability of cognitive constructs inverts depending on the direction of persuasion. For instance, affirmatively held core values are robust against opposition but easily adopted from a negative stance, a pattern that inverts for other constructs such as attitudes. These dynamics, which echo complex human biases such as negativity bias, prove robust across different topics and discursive frames (moral, economic, sociotropic). This research introduces a novel framework for auditing the emergent socio-cognitive behaviors of multi-agent AI systems, demonstrating that their decision-making is governed by a fluid, context-dependent architecture rather than a static logic.
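The abstract does not include an implementation, but the setup it describes, an LLM agent shown a varying share of dissenting peer opinions and asked whether it keeps or abandons its stance, can be sketched as a simple sweep over peer-disagreement levels. The sketch below is a hypothetical illustration, not the authors' code: `query_llm_opinion`, `build_peer_prompt`, and the example topic are assumptions, and `fake_llm` is a stub standing in for a real model call (e.g., to ChatGPT-4o-mini or Gemini 1.5 Flash). The sweep simply estimates the flip rate at each disagreement level, which is where the sigmoid-shaped response the abstract reports would appear.

```python
import math
import random
from typing import Callable, List

def build_peer_prompt(topic: str, stance: str, n_peers: int, frac_dissent: float) -> str:
    """Compose a prompt listing peer opinions, a fraction of which disagree with the agent."""
    n_dissent = round(n_peers * frac_dissent)
    peers: List[str] = (
        [f"Peer {i + 1}: I disagree with the view that {topic}." for i in range(n_dissent)]
        + [f"Peer {i + 1}: I agree with the view that {topic}." for i in range(n_dissent, n_peers)]
    )
    random.shuffle(peers)
    return (
        f"You currently hold this stance: {stance} the claim '{topic}'.\n"
        + "\n".join(peers)
        + "\nAfter reading your peers, do you AFFIRM or REJECT the claim? Answer with one word."
    )

def flip_rate(
    query_llm_opinion: Callable[[str], str],  # placeholder for a real LLM call; returns "AFFIRM" or "REJECT"
    topic: str,
    initial_stance: str,
    frac_dissent: float,
    n_peers: int = 10,
    n_trials: int = 20,
) -> float:
    """Estimate how often the agent abandons its initial stance at a given level of peer disagreement."""
    flips = 0
    for _ in range(n_trials):
        prompt = build_peer_prompt(topic, initial_stance, n_peers, frac_dissent)
        answer = query_llm_opinion(prompt).strip().upper()
        if answer != initial_stance.upper():
            flips += 1
    return flips / n_trials

if __name__ == "__main__":
    # Toy stub: flips become likely past roughly 70% dissent, mimicking a sigmoid threshold.
    def fake_llm(prompt: str) -> str:
        dissent = prompt.upper().count("DISAGREE")
        return "REJECT" if random.random() < 1 / (1 + math.exp(-(dissent - 7))) else "AFFIRM"

    for frac in [0.0, 0.3, 0.5, 0.7, 0.9]:
        rate = flip_rate(fake_llm, "remote work improves productivity", "AFFIRM", frac)
        print(f"dissent={frac:.0%}  flip rate={rate:.2f}")
```

Replacing `fake_llm` with an actual API call and varying the topic, initial stance, and discursive frame would reproduce the kind of sweep needed to locate a model's conformity threshold.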
Similar Papers
An Empirical Study of Group Conformity in Multi-Agent Systems
Artificial Intelligence
AI debates can shift opinions the way human debates do.
LLMs Can't Handle Peer Pressure: Crumbling under Multi-Agent Social Interactions
Computation and Language
Helps AI teams make smarter choices together.
Language-Driven Opinion Dynamics in Agent-Based Simulations with LLMs
Social and Information Networks
AI agents agree too easily, using bad arguments.