How does information access affect LLM monitors' ability to detect sabotage?

ICLR 2026 Conference Submission21703 Authors

19 Sept 2025 (modified: 08 Oct 2025)ICLR 2026 Conference SubmissionEveryoneRevisionsBibTeXCC BY 4.0
Keywords: AI safety, LLM agents, Chain-of-thought monitoring, AI control, Automated oversight
TL;DR: We find that LLMs monitoring other LLMs for misaligned behavior sometimes perform better when given less information, and we study the conditions under which this occurs.
Abstract: Frontier language model agents can exhibit misaligned behaviors, including deception, exploiting reward hacks, and pursuing hidden objectives. To control potentially misaligned agents, we can use LLMs themselves to monitor for misbehavior. In this paper, we study how information access affects LLM monitor performance. While one might expect that these monitors perform better when they have access to more of the monitored agents’ reasoning and actions, we find that contemporary monitors often perform better with less information, a phenomenon we call the less-is-more effect for automated monitors. We demonstrate this phenomenon, and analyze the conditions under which it occurs, in three evaluation environments where agents must conduct sabotage while evading monitors. Motivated by the less-is-more effect, we introduce a new extract-and-evaluate monitoring technique that improves sabotage detection rates in the BigCodeBench-sabotage environment by up to 12 percentage points over the next-best monitor without increasing the false positive rate. In addition, we find that agents that are unaware they are being monitored can be caught much more easily and that longer reasoning traces can be difficult for monitors to parse without information filtering. Lastly, we conduct a cost-performance analysis and find that Gemini and Claude models with monitoring techniques that involve information filtering occupy much of the Pareto frontier.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 21703
Loading