Keywords: Large Language Models, reasoning, efficient reasoning
Abstract: While Long Chain-of-Thought (CoT) reasoning significantly improves the performance of Large Language Models (LLMs) on complex reasoning tasks, the substantial computational and memory costs of generating long CoT sequences limit the efficiency and practicality of this approach.
Existing studies usually enhance the reasoning efficiency of LLMs by compressing CoT sequences.
However, this approach conflicts with test‑time scaling, limiting the reasoning capacity of LLMs.
In this paper, we propose an efficient reasoning framework that models the reasoning process of LLMs as a state‑transition process.
Specifically, we first apply a linear attention mechanism to estimate the LLM’s reasoning state, which records the historical reasoning information from previous reasoning steps.
Then, based on the query prompt and the reasoning state, the LLM can efficiently perform the current reasoning step and update the state.
With the linear attention, each token in the current reasoning step can directly retrieve relevant historical reasoning information from the reasoning state, without explicitly attending to tokens in previous reasoning steps.
In this way, the computational complexity of attention is reduced from quadratic to linear, significantly improving the reasoning efficiency of LLMs.
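The recurrent view of linear attention described above can be illustrated with a minimal sketch. This is not the paper's implementation; it shows only the generic (unnormalized) linear-attention recurrence, where a fixed-size state matrix accumulates key–value outer products so each new token reads history in constant cost per step. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

def linear_attention_step(state, q, k, v):
    """One recurrent step of unnormalized linear attention.

    state: (d_k, d_v) matrix accumulating outer products of past keys/values,
           playing the role of the "reasoning state" that summarizes history.
    q, k:  (d_k,) query/key vectors for the current token.
    v:     (d_v,) value vector for the current token.
    """
    state = state + np.outer(k, v)  # fold the current token into the state
    out = q @ state                 # retrieve history in O(d_k * d_v), not O(T)
    return out, state

# Processing T tokens costs O(T) state updates instead of the O(T^2)
# pairwise comparisons of standard softmax attention.
d_k, d_v, T = 4, 4, 8
rng = np.random.default_rng(0)
state = np.zeros((d_k, d_v))
for _ in range(T):
    q, k, v = rng.normal(size=d_k), rng.normal(size=d_k), rng.normal(size=d_v)
    out, state = linear_attention_step(state, q, k, v)
```

Because the state has a fixed size independent of sequence length, each reasoning step attends to the state rather than to all previous tokens, which is the source of the linear complexity claimed above.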
In addition, we propose a state-based reasoning strategy to mitigate the over-thinking issue caused by noisy reasoning steps.
Extensive experiments across multiple datasets and model sizes demonstrate that our framework not only improves the reasoning efficiency of LLMs but also enhances their reasoning performance.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 19927