Reliable Weak-to-Strong Monitoring of LLM Agents

ICLR 2026 Conference Submission6444 Authors

16 Sept 2025 (modified: 18 Nov 2025)ICLR 2026 Conference SubmissionEveryoneRevisionsBibTeXCC BY 4.0
Keywords: Agent Safety, Chain-of-Thought Monitoring, Large Language Model
TL;DR: This paper introduces a monitor red teaming workflow to stress test systems for detecting covert misbehavior in LLM agents, finding that a well-designed monitor scaffold enables weaker models to oversee strong aware attackers.
Abstract: We stress test monitoring systems for detecting covert misbehavior in LLM agents (e.g., secretly exfiltrating data). We propose a monitor red teaming (MRT) workflow that varies agent and monitor awareness, adversarial evasion strategies, and evaluation across tool-calling (SHADE-Arena) and computer-use (CUA-SHADE-Arena) environments. We benchmark standard monitor scaffoldings and introduce a hybrid hierarchical--sequential design. Our experiments yield three findings. First, agent awareness dominates monitor awareness: agents that know they are monitored substantially degrade detection, while increasing monitor awareness helps less than expected. Second, monitor scaffolding matters: our hybrid design consistently outperforms baselines and enables weaker monitors to oversee stronger agents (a weak-to-strong effect). Third, targeted human oversight is key: escalating only pre-flagged cases improves TPR by 15% at FPR=0.01. Our work positions MRT as a standard workflow for stress-testing oversight, revealing robustness gaps in both LLM- and human-based monitoring. We release code, data, and logs to support further research.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 6444
Loading