Keywords: Efficient reasoning, Large Reasoning Models, Large Language Models
TL;DR: We introduce DTS, a model-agnostic decoding method that sketches the reasoning space to approximate the shortest high-performing reasoning path, enhancing both reasoning performance and efficiency in large reasoning models.
Abstract: Large Reasoning Models (LRMs) demonstrate strong performance on complex reasoning tasks, yet they often suffer from overthinking: producing excessively long chain-of-thought (CoT) traces that increase inference cost and may degrade accuracy. Our analysis reveals a clear anti-correlation between reasoning length and accuracy: across multiple stochastic decodes, shorter reasoning paths consistently achieve the highest correctness, while longer ones accumulate errors and repetitions. These short optimal reasoning paths could, in principle, be found by fully enumerating the reasoning space. However, the tree-structured reasoning space grows exponentially with sequence length, making exhaustive exploration infeasible. To address this, we propose DTS, a model-agnostic decoding framework that sketches the reasoning space by selectively branching at high-entropy tokens and applies early stopping to select the shortest completed reasoning path. This approach approximates the optimal solution, enhancing both efficiency and accuracy without requiring additional training or supervision. Experiments on the AIME2024 and AIME2025 datasets with DeepSeek-R1-Distill-Qwen-7B and 1.5B show that DTS improves accuracy by up to 8%, reduces average reasoning length by 23%, and decreases repetition frequency by 12%, demonstrating DTS's effectiveness for scalable and efficient LRM reasoning.
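To make the decoding idea in the abstract concrete, the following is a minimal, hypothetical sketch of entropy-guided branching with shortest-path early stopping. It is not the authors' implementation: the mock model interface (`next_token_logits`), the vocabulary, the entropy threshold, and the branch factor are all illustrative assumptions standing in for a real LRM and its tuned hyperparameters.

```python
# Hypothetical sketch of DTS-style decoding: branch only at high-entropy tokens,
# expand the shortest partial path first, and return the first completed path.
import heapq
import math
import random

VOCAB = list("abcde") + ["<eos>"]
ENTROPY_THRESHOLD = 1.2   # assumed: branch only when next-token entropy exceeds this
BRANCH_FACTOR = 2         # assumed: number of alternatives explored at a branch point
MAX_LEN = 30              # assumed: cap on reasoning length

def next_token_logits(prefix):
    """Stand-in for a real LRM forward pass; returns deterministic pseudo-random logits."""
    rng = random.Random(hash(tuple(prefix)) & 0xFFFFFFFF)
    return [rng.uniform(-2.0, 2.0) for _ in VOCAB]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def dts_decode(prompt_tokens):
    """Sketch the reasoning space and early-stop on the shortest completed path."""
    # Priority queue ordered by path length, so shorter partial paths expand first.
    frontier = [(len(prompt_tokens), list(prompt_tokens))]
    while frontier:
        length, path = heapq.heappop(frontier)
        if length >= MAX_LEN:
            continue
        probs = softmax(next_token_logits(path))
        # Branch at high-entropy tokens; otherwise follow the single greedy token.
        k = BRANCH_FACTOR if entropy(probs) > ENTROPY_THRESHOLD else 1
        top = sorted(range(len(VOCAB)), key=lambda i: -probs[i])[:k]
        for idx in top:
            new_path = path + [VOCAB[idx]]
            if VOCAB[idx] == "<eos>":
                return new_path  # early stopping: first (hence shortest) completed path
            heapq.heappush(frontier, (len(new_path), new_path))
    return None  # no path completed within MAX_LEN

if __name__ == "__main__":
    print(dts_decode(["a"]))
```

Because the frontier is expanded in order of path length, the first path to emit an end-of-sequence token is the shortest completed one, which mirrors the "shortest completed reasoning path" selection described above; in a real system the mock forward pass would be replaced by batched model calls.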
Submission Number: 211