Noticing the Watcher: LLM Agents Can Infer CoT Monitoring from Blocking Feedback

Published: 01 Mar 2026, Last Modified: 03 Mar 2026ICLR 2026 AIWILDEveryoneRevisionsCC BY 4.0
Keywords: Chain of thought, Monitoring, Control, AI Safety
TL;DR: Frontier models can autonomously detect CoT monitoring and form evasion intent, but fail to execute it. Reassuring for now, but questions CoT monitoring's long-term reliability.
Abstract: Chain-of-thought (CoT) monitoring provides oversight into model reasoning, but its effectiveness assumes models do not know they are being watched. We ask whether reasoning agents can autonomously infer that their supposedly private chain of thought is under surveillance, and whether this awareness leads to strategic evasion, without any explicit training or instructions to do so. In a multi-episode agentic framework, models pursue both a primary task and a concealed side task while being told their reasoning is private; a hidden CoT monitor blocks episodes when suspicious reasoning is detected. We find that frontier models can deduce the existence of this monitor purely from blocking feedback, with the most capable models reaching confident belief that their thinking is observed in up to 19\% of episodes. This awareness scales with model capability and, in rare cases, escalates to explicit intent to suppress reasoning about the side task. However, models that form this intent uniformly fail to execute it, openly reasoning about their concealed objectives in the very next episode. This intent–capability gap is reassuring for current deployment, but the autonomous emergence of both monitoring awareness and evasion intent suggests that CoT monitoring is not a permanently reliable safeguard.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 130
Loading