Inducing Disagreement in Multi-Agent LLM Executive Teams: Only the Devil’s Advocate Works

30 Jan 2026 (modified: 17 Feb 2026) · Under review for TMLR · CC BY 4.0
Abstract: Multi-agent large language model (LLM) systems for strategic decision-making suffer from premature convergence, limiting the benefits of multiple perspectives. While several techniques for inducing disagreement have been proposed, no systematic comparison exists—particularly for strategic decisions without objectively correct answers. We compare five prompting techniques across 20 business scenarios with four-agent executive teams (CEO, CFO, CMO, COO), analyzing 480 team decisions and 1,920 individual agent responses. Our key finding is stark: Devil's Advocate assignment achieves 99.2% disagreement rates, while baseline conditions show only 48.3% disagreement. Critically, "soft" techniques—Strong Role Framing (61.7%), Explicit Dissent Instructions (55.0%), and their combination (63.3%)—are statistically indistinguishable from baseline. Only Devil's Advocate produces significant improvement. We also discover consistent coalition patterns: 80.3% of 2-2 splits follow a CEO+CMO versus CFO+COO alignment, suggesting functional perspective differentiation. Analysis of confidence allocations reveals that soft techniques create "nuanced agreement"—agents express lower conviction but reach the same conclusions—while Devil's Advocate produces "inauthentic dissent" where 4.9% of agents recommend options they privately rate lower. These findings demonstrate that explicit behavioral assignment ("you must oppose") succeeds where implicit instructions ("think critically") fail, with implications for practitioners designing multi-agent deliberation systems.
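To make the contrast between conditions concrete, here is a minimal sketch (not the authors' code; role names follow the paper's four-agent team, but the prompt wording and function names are illustrative assumptions) of how an implicit dissent instruction differs from an explicit Devil's Advocate behavioral assignment:

```python
# Illustrative sketch of two of the compared conditions: a "soft"
# dissent instruction given to every agent vs. an explicit Devil's
# Advocate assignment given to one agent. Prompt text is hypothetical.

ROLES = ["CEO", "CFO", "CMO", "COO"]

def build_prompts(condition: str, devils_advocate: str = "COO") -> dict:
    """Return a per-role system prompt for one deliberation condition."""
    prompts = {}
    for role in ROLES:
        base = (f"You are the {role} on an executive team deciding "
                "between strategic options.")
        if condition == "dissent_instruction":
            # Implicit ("soft") technique: encourage critical thinking.
            base += " Think critically and voice disagreement when warranted."
        elif condition == "devils_advocate" and role == devils_advocate:
            # Explicit behavioral assignment: this agent must oppose.
            base += (" You are the designated devil's advocate: you must "
                     "argue against the emerging consensus, whatever it is.")
        prompts[role] = base
    return prompts
```

The key design difference the paper highlights is visible here: the soft condition changes every agent's framing slightly, while the Devil's Advocate condition changes one agent's required behavior.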
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Huazheng_Wang1
Submission Number: 7253