Guided Speculative Inference for Efficient Test-Time Alignment of LLMs

Published: 26 Jan 2026, Last Modified: 02 Mar 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Test-Time Scaling, LLMs, Large Language Models, Speculative Decoding, Inference, Inference-Time Scaling, Best-of-n, Soft Best-of-n, PRM, Reward Models, Reward Guidance, KL Regularization, GSI
TL;DR: We describe a novel algorithm for test-time scaling that combines ideas from speculative decoding and best-of-n sampling and has provable guarantees.
Abstract: We propose Guided Speculative Inference (GSI), a novel algorithm for efficient reward-guided decoding in large language models. GSI combines soft best-of-$n$ test-time scaling with a reward model $r(x,y)$ and speculative samples from a small auxiliary model $\pi_S(y\mid x)$. We provably approximate both the optimal tilted policy $\pi_{\beta,B}(y\mid x) \propto \pi_B(y\mid x)\exp(\beta\,r(x,y))$ of soft best-of-$n$ under the base model $\pi_B$, and the expected reward under that optimal policy. In experiments on reasoning benchmarks (MATH500, OlympiadBench, Minerva Math, MMLU-STEM, GSM8K) and across different model families, our method achieves higher accuracy than standard soft best-of-$n$ with $\pi_S$ and than reward-guided speculative decoding (Liao et al., 2025), and in certain settings even outperforms soft best-of-$n$ with $\pi_B$, while reducing end-to-end latency by up to 28%.
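To make the tilted-policy idea concrete, the following is a minimal sketch of the soft best-of-$n$ selection step the abstract refers to: given $n$ candidate completions and their reward-model scores, one is drawn with probability proportional to $\exp(\beta\,r(x,y))$. The function names and inputs here are illustrative stand-ins, not the paper's implementation; in GSI the candidates would come from the small model $\pi_S$ and the scores from the reward model $r(x,y)$.

```python
import math
import random

def soft_best_of_n(samples, rewards, beta, rng=random):
    """Soft best-of-n selection: return one element of `samples` with
    probability proportional to exp(beta * reward).

    beta -> infinity recovers standard (hard) best-of-n;
    beta = 0 reduces to uniform sampling over the candidates.
    `samples`/`rewards` are hypothetical stand-ins for completions
    drawn from a draft model and their reward-model scores."""
    m = max(rewards)  # subtract the max before exponentiating, for stability
    weights = [math.exp(beta * (r - m)) for r in rewards]
    total = sum(weights)
    u = rng.random() * total  # inverse-CDF draw over the softmax weights
    acc = 0.0
    for sample, w in zip(samples, weights):
        acc += w
        if acc >= u:
            return sample
    return samples[-1]  # guard against floating-point round-off
```

With a large $\beta$ the highest-reward candidate is selected almost surely, matching the limit in which soft best-of-$n$ coincides with ordinary best-of-$n$.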
Primary Area: foundation or frontier models, including LLMs
Submission Number: 8006