Accelerating Large Language Model Reasoning via Speculative Search

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Tree-search-based reasoning methods have significantly enhanced the reasoning capability of large language models (LLMs) by facilitating the exploration of multiple intermediate reasoning steps, i.e., thoughts. However, these methods suffer from substantial inference latency, as they must generate numerous reasoning thoughts, which severely limits LLM applicability. To address this challenge, we propose a novel Speculative Search (SpecSearch) framework that significantly accelerates LLM reasoning by optimizing thought generation. Specifically, SpecSearch utilizes a small model that strategically collaborates with a large model at both thought and token levels, efficiently generating high-quality reasoning thoughts. The major pillar of SpecSearch is a novel quality-preserving rejection mechanism, which effectively filters out thoughts whose quality falls below that of the large model's outputs. Moreover, we show that SpecSearch preserves reasoning quality comparable to that of the large model. Experiments on both the Qwen and Llama models demonstrate that SpecSearch significantly outperforms state-of-the-art approaches, achieving up to 2.12$\times$ speedup with comparable reasoning quality.
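
To make the abstract's mechanism concrete, here is a minimal Python sketch of the speculative thought-generation idea it describes: a small draft model proposes thoughts, a quality estimate gates them, and rejected drafts fall back to the large model. This is an illustrative sketch under assumptions, not the authors' implementation (see the linked repository for that); every name below (draft_thought, target_thought, evaluate_quality, speculative_expand, the 0.5 threshold) is hypothetical.

```python
# Hypothetical sketch of speculative thought generation with a
# quality-preserving rejection step. All model calls are stubbed out.
import random

random.seed(0)


def draft_thought(state: str) -> str:
    """Stand-in for the small (draft) model proposing one intermediate
    reasoning step ("thought") given the partial solution so far."""
    return state + f" -> small-step-{random.randint(0, 9)}"


def target_thought(state: str) -> str:
    """Stand-in for the large (target) model generating a thought itself;
    used as the fallback when all drafts are rejected."""
    return state + f" -> large-step-{random.randint(0, 9)}"


def evaluate_quality(thought: str) -> float:
    """Stand-in for a thought-quality estimate (e.g. a reward model score);
    random here purely for illustration."""
    return random.random()


def speculative_expand(state: str, threshold: float, num_drafts: int = 4) -> list[str]:
    """Expand one tree-search node: keep small-model drafts whose estimated
    quality clears the threshold (a proxy for "not worse than the large
    model's output"); if none survive, let the large model generate instead."""
    accepted = [t for t in (draft_thought(state) for _ in range(num_drafts))
                if evaluate_quality(t) >= threshold]
    if not accepted:  # rejection path: fall back to the large model
        accepted = [target_thought(state)]
    return accepted


if __name__ == "__main__":
    frontier = ["problem"]
    for _ in range(3):  # a tiny beam-style loop over the thought tree
        frontier = [child for s in frontier
                    for child in speculative_expand(s, threshold=0.5)]
        # keep only the highest-scoring partial reasoning paths
        frontier = sorted(frontier, key=evaluate_quality, reverse=True)[:2]
    print(frontier)
```

The speedup intuition is that most thoughts in the search tree come from the cheap draft model, while the rejection gate is what keeps overall reasoning quality close to that of the large model; the paper's actual acceptance criterion and thought/token-level collaboration are more involved than this toy threshold.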
Lay Summary: Large language models, like ChatGPT, are great at solving complex problems by thinking through different possible steps — a bit like how a person might work through a puzzle. But for the computer to try out many possible ways of solving a problem, it usually needs to spend a lot of time thinking, which makes these models slow to use. To solve this, we created a new method called Speculative Search (SpecSearch). Our approach speeds up the thinking process by letting a smaller, faster program work together with the larger, smarter model. The small model quickly generates possible steps, and then the large model only spends time checking and keeping the high-quality ones. This way, the system avoids wasting time on ideas that wouldn’t be helpful anyway. Our experiments show that SpecSearch makes language models much faster — over twice as fast in some cases — without losing their ability to reason well. We have shared our code at https://github.com/MIRALab-USTC/LLMReasoning-SpecSearch, so others can use and build on our method for making AI smarter and faster.
Link To Code: https://github.com/MIRALab-USTC/LLMReasoning-SpecSearch
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Model Reasoning, Inference Acceleration, Tree Search, Speculative Execution
Submission Number: 14604