Knowing When to Quit: A Principled Framework for Dynamic Abstention in LLM Reasoning
Keywords: selective prediction, abstention, chain-of-thought reasoning, value functions, LLM inference, reinforcement learning, mathematical reasoning
TL;DR: We present the first formal analysis of mid-generation abstention for LLMs, showing that abstaining when the value function falls below a threshold is optimal in idealized settings and outperforms baselines under general conditions.
Abstract: Large language models (LLMs) using chain-of-thought reasoning often waste substantial compute by producing long, incorrect responses. Abstention can mitigate this by withholding outputs unlikely to be correct. While most abstention methods decide to withhold outputs before or after generation, dynamic mid-generation abstention allows terminating unpromising reasoning traces early, at any token position. Prior work has explored empirical variants of this idea, but principled guidance for the abstention rule remains lacking. We present a formal analysis of dynamic abstention for LLMs, modeling abstention as an explicit action within a regularized reinforcement learning framework. An abstention reward parameter controls the trade-off between the compute saved by stopping early and the information gained by continuing. We show that abstaining when the value function falls below this reward strictly outperforms natural baselines under general conditions. We further derive a principled and efficient method to approximate the value function. Empirical results on mathematical reasoning tasks support our theory and demonstrate improved selective accuracy over existing methods.
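The abstract's decision rule (abstain at the first position where the estimated value drops below the abstention reward) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the per-token value estimates, and the tuple return format are all hypothetical, and in practice the value estimates would come from the paper's learned value-function approximation rather than a precomputed list.

```python
def dynamic_abstention(value_estimates, abstention_reward):
    """Scan per-token value estimates left to right and abstain at the
    first token position where the estimated value of continuing falls
    below the abstention reward; otherwise finish the full trace.

    value_estimates: hypothetical per-token estimates V(s_t) of the
        probability that continuing the trace yields a correct answer.
    abstention_reward: the threshold parameter trading compute for
        information, as described in the abstract.
    Returns ("abstain", t) for an early stop at position t, or
    ("answer", T) when the trace of length T completes.
    """
    for t, v in enumerate(value_estimates):
        if v < abstention_reward:
            return ("abstain", t)
    return ("answer", len(value_estimates))


# Toy usage: a trace whose estimated value decays is cut off early,
# while a consistently promising trace runs to completion.
decaying = [0.9, 0.8, 0.4, 0.2]
promising = [0.9, 0.8, 0.7]
print(dynamic_abstention(decaying, 0.5))   # → ("abstain", 2)
print(dynamic_abstention(promising, 0.5))  # → ("answer", 3)
```

The rule is greedy in the value estimate: it never resumes a trace once the estimate dips below the threshold, which matches the abstract's framing of abstention as a one-shot action taken the moment continuing is no longer worth the abstention reward.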
Submission Number: 227