Belief in Authority: Impact of Authority in Multi-Agent Evaluation Framework

ACL ARR 2026 January Submission 1820 Authors

31 Dec 2025 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Authority bias, Multi-agent evaluation system, Psychology-inspired, Large Language Models
Abstract: Multi-agent systems utilizing large language models often assign authoritative roles to improve performance, yet the impact of authority bias on agent interactions remains underexplored. We present the first systematic analysis of role-based authority bias in free-form multi-agent evaluation using ChatEval. Applying French and Raven's theory of the bases of social power, we classify authoritative roles into legitimate, referent, and expert types and analyze their influence across 12-turn conversations. Experiments with GPT-4o and DeepSeek R1 reveal that Expert and Referent power roles exert stronger influence than Legitimate power roles. Crucially, authority bias emerges not through active conformity by general agents, but through authoritative roles consistently maintaining their positions while general agents demonstrate flexibility. Furthermore, authority influence requires clear position statements, as neutral responses fail to generate bias. These findings provide key insights for designing multi-agent frameworks with asymmetric interaction patterns.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: LLM/AI agents, model bias/fairness evaluation
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 1820