Keywords: multi-agent system, debate, reasoning
Abstract: Multi-agent debate (MAD) has recently emerged as a promising framework for improving the reasoning performance of large language models (LLMs). Yet, whether LLM agents can genuinely engage in deliberative reasoning—beyond simple ensembling or majority voting—remains unclear. We address this question through a controlled study using the \textit{Knight–Knave–Spy} logic puzzle, which enables precise, step-wise evaluation of debate outcomes and processes under verifiable ground truth. We systematically vary six structural and cognitive factors (agent team size, team composition, confidence visibility, debate order, debate depth, and task difficulty) to disentangle their respective effects on collective reasoning. Our results show that intrinsic reasoning strength and group diversity are the dominant drivers of debate success, while structural parameters such as debate order or confidence visibility offer only limited gains. Beyond outcomes, process-level analyses identify key behavioral patterns: majority pressure suppresses independent correction, effective teams overturn incorrect consensus, and rational, validity-aligned reasoning most strongly predicts improvement. These findings provide valuable insights into \textit{how} and \textit{why} LLM debates succeed or fail, offering guidance for designing interpretable and truth-seeking multi-agent reasoning systems.
Our dataset and code are available at https://anonymous.4open.science/r/ControlMAD-CFE7/README.md.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: multi-agent system, debate, reasoning
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 5998