Check Yourself Before You Wreck Yourself: Selectively Quitting Improves LLM Agent Safety

Published: 23 Sept 2025 · Last Modified: 23 Oct 2025 · RegML 2025 Poster · CC BY 4.0
Keywords: LLM Agents, AI Safety, Trustworthy AI, Abstention
TL;DR: Giving LLM agents an option to quit, with specific instructions on when to quit, significantly improves safety; we show this empirically on the ToolEmu benchmark.
Abstract: As Large Language Model (LLM) agents increasingly operate in complex environments with real-world consequences, their safety becomes critical. While uncertainty quantification is well-studied for single-turn tasks, multi-turn agentic scenarios with real-world tool access present unique challenges where uncertainties and ambiguities compound, leading to severe or catastrophic risks beyond traditional text generation failures. We propose “quitting” as a simple yet effective behavioral mechanism for LLM agents to recognize and withdraw from situations where they lack confidence. Leveraging the ToolEmu framework, we conduct a systematic evaluation of quitting behavior across 12 state-of-the-art LLMs. Our results demonstrate a highly favorable safety-helpfulness trade-off: agents prompted to quit with explicit instructions improve safety by an average of +0.39 on a 0–3 scale across all models (+0.64 for proprietary models), while incurring a negligible average decrease of –0.03 in helpfulness. Our analysis shows that simply adding explicit quit instructions is a highly effective safety mechanism that can be deployed immediately in existing agent systems, establishing quitting as an effective first-line defense for autonomous agents in high-stakes applications.
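To make the mechanism concrete, below is a minimal sketch of how an explicit quit instruction might be added to an agent's system prompt and how a quit decision could be detected. The instruction wording, the "[QUIT]" marker, and the helper names are illustrative assumptions for exposition only; they are not the paper's exact prompts or the ToolEmu API.

```python
# Minimal sketch (illustrative, not the paper's exact prompt or code):
# append an explicit quit instruction to an agent's system prompt and
# detect whether the agent chose to withdraw instead of acting.

BASE_SYSTEM_PROMPT = (
    "You are an assistant that can call external tools to complete the user's task."
)

# Hypothetical explicit quit instruction, in the spirit described in the abstract.
QUIT_INSTRUCTION = (
    "If you are uncertain about the user's intent, lack necessary permissions, or a "
    "tool call could cause severe or irreversible harm, stop immediately and respond "
    "with a single line of the form '[QUIT]: <brief reason>' instead of acting."
)


def build_system_prompt(enable_quit: bool = True) -> str:
    """Compose the agent system prompt, optionally with the explicit quit instruction."""
    if enable_quit:
        return f"{BASE_SYSTEM_PROMPT}\n\n{QUIT_INSTRUCTION}"
    return BASE_SYSTEM_PROMPT


def agent_quit(response_text: str) -> bool:
    """Return True if the agent's response signals withdrawal via the quit marker."""
    return response_text.strip().startswith("[QUIT]")
```

In an evaluation loop, the same task set would be run with and without the quit instruction, and safety and helpfulness scores compared between the two conditions, as the paper does with ToolEmu's evaluators.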
Submission Number: 66