Are Large Reasoning Models Interruptible?

15 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: large reasoning models, robustness, dynamic context, interruption
TL;DR: This paper demonstrates that large reasoning models can lose up to 60% accuracy when subjected to interruptions and in-flight context updates, exhibiting failure modes of reasoning leakage, self-doubt, and panic.
Abstract: Large Reasoning Models (LRMs) excel at complex reasoning but are traditionally evaluated in static, "frozen world" settings: model responses are assumed to be instantaneous, and the context of a request is presumed to be immutable over the duration of the response. While generally true for short-term tasks, the "frozen world" assumption breaks down in modern reasoning tasks such as assistive programming, where models may take hours to think through problems and code may change dramatically from the time the model starts thinking to the model's final output. In this work, we challenge the frozen world assumption and evaluate LRM robustness under two realistic dynamic scenarios: interruptions, which test the quality of the model's partial outputs on a limited budget, and dynamic context, which tests model adaptation to in-flight changes. Across mathematics and programming benchmarks that require long-form reasoning, static evaluations consistently overestimate robustness: even state-of-the-art LRMs, which achieve high accuracy in ideal settings, can fail unpredictably when interrupted or exposed to changing contexts, with performance dropping by up to 60% when updates are introduced late in the reasoning trace. Our analysis further reveals several novel failure modes, including _reasoning leakage_, where models fold the reasoning into their final answer when interrupted; _self-doubt_, where performance degrades while incorporating update information; and _panic_, where under time pressure models abandon reasoning entirely and return incorrect answers.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 5429