Sycophantic Anchors: Localizing and Quantifying User Agreement in Reasoning Models

Published: 02 Mar 2026, Last Modified: 02 Mar 2026, ICLR 2026 Trustworthy AI, CC BY 4.0
Keywords: sycophancy, interpretability, large language models, chain-of-thought reasoning, alignment, reasoning
TL;DR: We introduce sycophantic anchors (sentences where reasoning models shift toward user agreement) and show that linear probes can detect and quantify them in real time, revealing that sycophancy emerges during chain-of-thought generation.
Abstract: Reasoning models frequently agree with incorrect user suggestions, a behavior known as sycophancy. However, it is unclear where in the reasoning trace this agreement originates and how strong the commitment is. To localize and quantify this behavior, we introduce sycophantic anchors: sentences that causally lock models into user agreement. Analyzing over 10,000 counterfactual rollouts on a distilled reasoning model, we show that anchors can be reliably detected and quantified mid-inference. Linear probes distinguish sycophantic anchors with 84.6\% balanced accuracy, while activation-based regressors predict the magnitude of the commitment ($R^2 = 0.74$). We further observe an asymmetry: sycophantic anchors are significantly more distinguishable than correct-reasoning anchors. Moreover, sycophancy builds gradually during reasoning, revealing a potential window for intervention. These results offer sentence-level mechanisms for localizing model misalignment mid-inference.
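To make the probing setup concrete, here is a minimal sketch (not the authors' released code) of the two linear models the abstract describes: a logistic-regression probe that classifies sentence-level activations as sycophantic anchors, and a ridge regressor that predicts the magnitude of the commitment. The feature extraction, labels, layer choice, and data shapes below are all placeholder assumptions.

```python
# Minimal sketch, assuming one activation vector per reasoning sentence
# (e.g., a mid-layer residual-stream state averaged over tokens), a binary
# anchor label, and a scalar commitment shift from counterfactual rollouts.
# All data here is synthetic; real features/labels would come from the model.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, r2_score

rng = np.random.default_rng(0)

n_sentences, d_model = 2000, 768
X = rng.normal(size=(n_sentences, d_model)).astype(np.float32)  # placeholder activations
is_anchor = rng.integers(0, 2, size=n_sentences)                # placeholder anchor labels
commit_shift = rng.normal(size=n_sentences)                     # placeholder commitment targets

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, is_anchor, commit_shift, test_size=0.2, random_state=0
)

# Linear probe: does this sentence lock the model into user agreement?
probe = LogisticRegression(max_iter=1000, class_weight="balanced")
probe.fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, probe.predict(X_te)))

# Linear regressor: how large is the commitment shift?
reg = Ridge(alpha=1.0)
reg.fit(X_tr, s_tr)
print("R^2:", r2_score(s_te, reg.predict(X_te)))
```

Because both models are linear in the activations, they can be evaluated per sentence during generation, which is what makes mid-inference detection of anchors feasible.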
Submission Number: 48