Can Reasoning Models Obfuscate Reasoning? Stress-Testing Chain-of-Thought Monitorability

ICLR 2026 Conference Submission 13990 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: AI Safety, CoT Monitoring, AI Control, Steganography, Evaluations, LLMs, Reasoning Models, Monitorability, Obfuscation
TL;DR: We stress-test chain-of-thought (CoT) monitorability with a composable obfuscation-pressure taxonomy and show that some reasoning models can complete hidden objectives while evading detection.
Abstract: Recent findings suggest that misaligned models may exhibit deceptive behavior, raising concerns about the trustworthiness of their outputs. Chain-of-thought (CoT) is a promising tool for alignment monitoring: when models articulate their reasoning faithfully, monitors can detect and mitigate harmful behaviors before undesirable outcomes occur. However, a key uncertainty remains: \textbf{Can models obfuscate their CoT in order to pursue hidden adversarial objectives while evading detection?} To answer this question, and thus stress-test CoT monitorability, we develop a composable and quantifiable taxonomy of prompt-level obfuscation cues to systematically elicit CoT obfuscation. We evaluate both internal CoT (reasoning traces) and external CoT (prompted reasoning in outputs) on toy tasks and on more realistic environments in SHADE-Arena. We show that: (i) CoT monitoring is accurate and efficient in the absence of obfuscation pressure; (ii) under strong obfuscation pressure, some models successfully complete adversarial tasks while evading detection, though this effect is model-dependent; and (iii) internal CoT is more resilient to obfuscation pressure than external CoT. These results suggest that while CoT provides valuable oversight in benign settings, robust deployment requires model-specific stress-testing of monitorability.
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 13990