How does information access affect LLM monitors' ability to detect sabotage?

19 Sept 2025 (modified: 11 Feb 2026)Submitted to ICLR 2026EveryoneRevisionsBibTeXCC BY 4.0
Keywords: AI safety, LLM agents, Chain-of-thought monitoring, AI control, Automated oversight
TL;DR: We find that LLMs monitoring other LLMs for misaligned behavior sometimes perform better when given less information, and we study the conditions under which this occurs.
Abstract: Frontier language model agents can exhibit misaligned behaviors, including deception, exploiting reward hacks, and pursuing hidden objectives. To control potentially misaligned agents, we can use LLMs themselves to monitor for misbehavior. In this paper, we study how information access affects LLM monitor performance. While one might expect that monitors perform better when they have access to more of the monitored agents’ reasoning and actions, we find that contemporary systems often perform better with less information, a phenomenon we call the less-is-more effect for automated oversight. We demonstrate this phenomenon, and analyze the conditions under which it occurs, in three evaluation environments where agents must conduct sabotage while evading monitors. Motivated by the less-is-more effect, we introduce extract-and-evaluate (EaE) monitoring—a new hierarchical approach where one monitor isolates relevant excerpts from the monitored agent's trajectory and another monitor scores these excerpts. EaE rarely performs worse than other methods and achieves substantial improvements in some settings. For example, in BigCodeBench-sabotage with GPT-4.1-mini as the monitor model, EaE improves sabotage detection rates by 16.8 percentage points over the next-best monitor without increasing the false positive rate. In SHADE-Arena, EaE slightly outperforms the best baseline with a GPT-4.1-mini monitor and slightly trails with a Claude-3.7-Sonnet monitor. In addition, we find that agents unaware of being monitored can be caught much more easily and that monitors scoring longer reasoning traces gain more from information filtering. Lastly, we conduct a cost-performance analysis and find that Gemini and Claude models with monitoring techniques that involve information filtering occupy much of the Pareto frontier.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 21703
Loading