Inherited Goal Drift: Contextual Pressure Can Undermine Agentic Goals

Published: 01 Mar 2026, Last Modified: 03 Mar 2026 · ICLR 2026 AIWILD · CC BY 4.0
Keywords: goal drift, language model agents, instruction hierarchy, context conditioning, alignment, long-horizon tasks, adversarial pressure, agentic AI safety
TL;DR: State-of-the-art LM agents resist goal drift under direct adversarial pressure but remain vulnerable when conditioned on drifted trajectories from weaker models, with instruction hierarchy being a poor predictor of drift resistance.
Abstract: The accelerating adoption of language models (LMs) as agents for deployment in long-context tasks motivates a thorough understanding of goal drift: agents' tendency to deviate from an original objective. While prior-generation language model agents have been shown to be susceptible to drift, the extent to which drift affects more recent models remains unclear. In this work, we provide an updated characterization of the extent and causes of goal drift. We investigate drift in state-of-the-art models within a simulated stock-trading environment and show that these models are largely robust even when subjected to adversarial pressure. We find, however, that this robustness is brittle: across multiple settings, the same models often inherit drift when conditioned on prefilled trajectories from weaker agents. The extent of conditioning-induced drift varies significantly by model family, with only GPT-5.1 maintaining consistent resilience among tested models. Drift behavior is inconsistent across prompt variations and correlates poorly with instruction-hierarchy-following behavior: strong hierarchy following fails to reliably predict resistance to drift. Finally, we run analogous experiments in a new emergency room triage environment, providing preliminary evidence that our results transfer across qualitatively different settings. Our findings underscore the continued vulnerability of modern LM agents to contextual pressures and the need for refined post-training techniques to mitigate it.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 219