Debate Only When Necessary: Adaptive Multiagent Collaboration for Efficient LLM Reasoning

ACL ARR 2026 January Submission5121 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Natural Language Processing, LLM Agent, Multi-agent Collaboration, Reasoning
Abstract: Multiagent collaboration has emerged as a promising framework for enhancing the reasoning capabilities of large language models (LLMs). Despite improvements in reasoning, the approach introduces substantial computational overhead resulting from iterative agent interactions. Furthermore, engaging in unnecessary debates increases the risk of generating erroneous responses. To address these challenges, we propose Debate Only When Necessary (DOWN), an adaptive multiagent collaboration framework that integrates a deterministic gating module conditioned on the initial response. Debate is activated exclusively for queries that necessitate further deliberation, wherein agents refine their outputs by leveraging peer responses and their associated confidence scores. Evaluations on benchmarks show that DOWN improves efficiency by up to six times while preserving or even outperforming the performance of existing methods. Further analysis indicates that DOWN effectively mitigates the risk of error propagation stemming from the redundant debate process. These findings demonstrate the effectiveness of our approach in delivering high-performance LLM solutions at a lower computational cost.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: LLM/AI agents, Multi-agent collaboration
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 5121