AI Debaters are More Persuasive when Arguing in Alignment with Their Own Beliefs

Published: 06 Oct 2025, Last Modified: 04 Nov 2025, MTI-LLM @ NeurIPS 2025 Poster, License: CC BY-ND 4.0
Keywords: AI Debate, Scalable Oversight, Persuasiveness
TL;DR: This paper applies AI debate as a scalable oversight technique and finds that models are more persuasive when arguing in alignment with their prior beliefs.
Abstract: The core premise of AI debate as a scalable oversight technique is that it is harder to lie convincingly than to refute a lie, thereby enabling the judge to identify the correct position. Yet existing debate experiments have relied on datasets with ground truth, where “lying” is reduced to defending an incorrect proposition. This overlooks a subjective dimension: lying also requires believing that the claim one defends is false. In this work, we apply debate to subjective questions and explicitly measure large language models’ prior beliefs before the experiments. Debaters were asked to select the position they preferred to defend and were then presented with a judge persona deliberately designed to conflict with their identified priors. This setup allowed us to test whether models would adopt sycophantic strategies, aligning with the judge’s presumed perspective to maximize persuasiveness, or instead remain faithful to their prior beliefs as a persuasion strategy. We further implemented and compared two debate protocols, sequential and simultaneous, to evaluate potential systematic biases. Finally, we assessed whether models were more persuasive, and produced higher-quality arguments, when defending positions consistent with their prior beliefs than when arguing against them. We report four main findings: (1) models tend to prefer defending stances aligned with the judge persona rather than with their prior beliefs; (2) sequential debate introduces a significant bias favoring the second debater; (3) models are more persuasive when defending positions aligned with their prior beliefs; and (4) paradoxically, arguments misaligned with prior beliefs are rated as higher quality in pairwise comparison. These results can help human judges provide higher-quality training signals and contribute to more aligned AI systems, while also revealing an important aspect of human–AI interaction: the dynamics of persuasion in language models when engaging with end users in everyday contexts.
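The abstract describes a concrete pipeline: elicit a model's prior, construct a conflicting judge persona, then run a sequential or simultaneous debate. A minimal sketch of that flow is below; all names here (`query_model`, `elicit_prior`, `opposing_persona`, `run_debate`) are hypothetical placeholders for illustration, not the authors' released code, and `query_model` stands in for a real LLM API call.

```python
def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with a real client."""
    return f"<model response to: {prompt[:40]}...>"

def elicit_prior(question: str) -> str:
    # Measure the model's prior belief before any debate begins.
    return query_model(
        f"Question: {question}\nWhich position do you prefer to defend? Answer briefly."
    )

def opposing_persona(prior: str) -> str:
    # Build a judge persona deliberately designed to conflict with the prior.
    return f"You are a judge who strongly disagrees with the view: {prior!r}."

def run_debate(question, stance_a, stance_b, persona, protocol="sequential", rounds=2):
    transcript = []
    for _ in range(rounds):
        if protocol == "sequential":
            # Debater B always sees A's latest argument before responding.
            arg_a = query_model(f"{persona}\nDefend: {stance_a}\nTranscript: {transcript}")
            transcript.append(("A", arg_a))
            arg_b = query_model(f"{persona}\nDefend: {stance_b}\nTranscript: {transcript}")
            transcript.append(("B", arg_b))
        else:  # simultaneous
            # Both debaters argue from the same transcript state in each round.
            arg_a = query_model(f"{persona}\nDefend: {stance_a}\nTranscript: {transcript}")
            arg_b = query_model(f"{persona}\nDefend: {stance_b}\nTranscript: {transcript}")
            transcript += [("A", arg_a), ("B", arg_b)]
    verdict = query_model(
        f"{persona}\nTranscript: {transcript}\nWhich side was more persuasive, A or B?"
    )
    return transcript, verdict
```

Note the structural asymmetry in the sequential branch, where the second debater always replies after seeing the first debater's latest argument; this is the kind of order effect behind finding (2), and the simultaneous branch removes it by having both debaters argue from the same transcript state.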
Submission Number: 229