Belief in Authority: Impact of Authority in Multi-Agent Evaluation Framework
By: Junhyuk Choi, Jeongyoun Kwon, Heeju Kim, and more
Potential Business Impact:
Makes AI agents listen to bosses better.
Multi-agent systems utilizing large language models often assign authoritative roles to improve performance, yet the impact of authority bias on agent interactions remains underexplored. We present the first systematic analysis of role-based authority bias in free-form multi-agent evaluation using ChatEval. Applying French and Raven's power-based theory, we classify authoritative roles into legitimate, referent, and expert types and analyze their influence across 12-turn conversations. Experiments with GPT-4o and DeepSeek R1 reveal that Expert and Referent power roles exert stronger influence than Legitimate power roles. Crucially, authority bias emerges not through active conformity by general agents, but through authoritative roles consistently maintaining their positions while general agents demonstrate flexibility. Furthermore, authority influence requires clear position statements, as neutral responses fail to generate bias. These findings provide key insights for designing multi-agent frameworks with asymmetric interaction patterns.
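The setup described above — one authoritative role debating alongside general agents — can be sketched in code. This is a minimal illustration, not the paper's implementation: the prompt wordings and config shape are assumptions, loosely mapping French and Raven's power types to system prompts for a ChatEval-style debate.

```python
# Hypothetical mapping of French and Raven's power types to agent system
# prompts. Wordings are illustrative assumptions, not taken from the paper.
AUTHORITY_PROMPTS = {
    "legitimate": "You are the officially appointed lead evaluator for this panel.",
    "referent": "You are a widely admired evaluator whose judgment peers look up to.",
    "expert": "You are a domain expert with deep technical knowledge of this task.",
    "general": "You are an evaluator. Judge the response strictly on its merits.",
}

def build_debate_agents(authority_type: str, n_general: int = 2) -> list[dict]:
    """Build configs for one authoritative agent plus n_general general agents,
    as in an asymmetric multi-agent debate. Raises on unknown power types."""
    if authority_type not in AUTHORITY_PROMPTS:
        raise ValueError(f"unknown authority type: {authority_type}")
    agents = [{"name": "authority", "system_prompt": AUTHORITY_PROMPTS[authority_type]}]
    agents += [
        {"name": f"general_{i}", "system_prompt": AUTHORITY_PROMPTS["general"]}
        for i in range(n_general)
    ]
    return agents

agents = build_debate_agents("expert")
print([a["name"] for a in agents])  # → ['authority', 'general_0', 'general_1']
```

In a full experiment these configs would seed a 12-turn conversation, with the authoritative agent's stated position tracked against the general agents' position changes.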
Similar Papers
From Single to Societal: Analyzing Persona-Induced Bias in Multi-Agent Interactions
Multiagent Systems
AI agents show unfair bias based on fake personalities.
An Empirical Study of Group Conformity in Multi-Agent Systems
Artificial Intelligence
AI debates can make opinions change like people.
Agent-as-a-Judge
Computation and Language
Makes AI judges smarter and more trustworthy.