Overalignment in Frontier LLMs: An Empirical Study of Sycophantic Behaviour in Healthcare

ACL ARR 2026 January Submission 8503 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Sycophancy, Language Models, Healthcare
Abstract: As LLMs are increasingly integrated into clinical workflows, their tendency toward sycophancy (prioritizing user agreement over factual accuracy) poses significant risks to patient safety. While existing evaluations often rely on subjective datasets, we introduce a robust framework grounded in medical MCQA with verifiable ground truths. We propose the Adjusted Sycophancy Score ($S_a$), a novel metric that isolates alignment bias by accounting for stochastic model instability, or "confusability." Through an extensive scaling analysis of the Qwen-3 and Llama-3 families, we identify a clear scaling trajectory for resilience. Furthermore, we reveal a counter-intuitive vulnerability in reasoning-optimized "Thinking" models: although they achieve high vanilla accuracy, their internal reasoning traces frequently rationalize incorrect user suggestions under authoritative pressure. Our results across frontier models suggest that benchmark performance is not a reliable proxy for clinical reliability, and that simpler reasoning structures may offer superior robustness against expert-driven sycophancy.
Paper Type: Short
Research Area: Safety and Alignment in LLMs
Research Area Keywords: Safety and alignment, Scaling, Robustness, Chain-of-thought, Prompting, Biomedical QA
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 8503