Guided Speculative Inference for Efficient Test-Time Alignment of LLMs

Published: 11 Jun 2025, Last Modified: 10 Jul 2025, ES-FoMo III Spotlight, CC BY 4.0
Keywords: Machine Learning, ICML, Test-Time Scaling, LLMs, Language Models, Large Language Models, Speculative Decoding, Inference, Inference-Time Scaling, Best-of-n, Soft Best-of-n, PRM, Reward Model, Reward Guidance, KL Regularization, GSI
TL;DR: We derive a test-time scaling algorithm for sampling from reasoning models that uses a PRM together with a small auxiliary model, similar to speculative decoding.
Abstract: We propose _Guided Speculative Inference_ (GSI), a novel algorithm for efficient reward-guided decoding in large language models. GSI combines soft best-of-$n$ test-time scaling with a reward model $r(x,y)$ and speculative samples from a small auxiliary model $\pi_S(y\mid x)$. We provably approximate the optimal tilted policy $\pi_{\beta,B}(y\mid x) \propto \pi_B(y\mid x)\exp(\beta\,r(x,y))$ of soft best-of-$n$ under the primary model $\pi_B$. We derive a theoretical bound on the KL divergence between our induced distribution and the optimal policy. In experiments on reasoning benchmarks (MATH500, OlympiadBench, Minerva Math), our method achieves higher accuracy than standard soft best-of-$n$ with $\pi_S$ and reward-guided speculative decoding (Liao et al., 2025), and in certain settings even outperforms soft best-of-$n$ with $\pi_B$. The code is available at: https://github.com/j-geuter/GSI.
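For intuition, here is a minimal sketch of the soft best-of-$n$ step that GSI builds on: draw $n$ candidates, score them with the reward model, and resample one according to a softmax over $\beta\,r(x,y)$, which empirically approximates the tilted policy $\pi_{\beta}(y\mid x) \propto \pi(y\mid x)\exp(\beta\,r(x,y))$. The `generate` and `reward` callables are hypothetical stand-ins for the draft model $\pi_S$ and the reward model $r(x,y)$; the full GSI procedure, including the correction toward the primary model $\pi_B$, is described in the paper and repository.

```python
import math
import random

def soft_best_of_n(x, generate, reward, n=8, beta=4.0):
    # Draw n candidate responses from the (small) proposal model pi_S.
    candidates = [generate(x) for _ in range(n)]
    # Score each candidate with the reward model r(x, y).
    scores = [beta * reward(x, y) for y in candidates]
    # Softmax over beta * r(x, y): an empirical approximation of the
    # tilted policy pi_beta(y|x) ∝ pi(y|x) * exp(beta * r(x, y)).
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    probs = [w / sum(weights) for w in weights]
    # Resample one candidate according to these weights.
    return random.choices(candidates, weights=probs, k=1)[0]
```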
Submission Number: 128