Test-Time Alignment of LLMs via Sampling-Based Optimal Control in pre-logit space

ICLR 2026 Conference Submission 10984 Authors

18 Sept 2025 (modified: 21 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: LLM, Alignment, Control theory, Importance sampling
TL;DR: We propose a new training-free test-time alignment method based on sampling-based model predictive control.
Abstract: Test-time alignment of large language models (LLMs) has attracted attention because fine-tuning LLMs incurs high computational costs. In this paper, we propose a new test-time alignment method called adaptive importance sampling on pre-logits (AISP), based on sampling-based model predictive control with a stochastic control input. AISP applies a Gaussian perturbation to the pre-logits, i.e., the outputs of the penultimate layer, so as to maximize the expected reward with respect to the mean of the perturbation. We show that the optimal mean is obtained by importance sampling with sampled rewards. AISP outperforms best-of-n sampling in terms of reward for a given number of samples and achieves higher rewards than other reward-based test-time alignment methods.
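The abstract's core update, estimating the reward-maximizing perturbation mean by importance sampling over Gaussian samples, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `aisp_mean_update`, the softmax-style weighting, and all parameters (`n_samples`, `sigma`, `temperature`) are assumptions for the sketch, with the reward evaluated by a generic `reward_fn` standing in for a reward model.

```python
import numpy as np

def aisp_mean_update(pre_logit, reward_fn, n_samples=64, sigma=1.0,
                     temperature=1.0, rng=None):
    """Illustrative sketch: estimate the reward-maximizing Gaussian
    perturbation mean for a pre-logit vector via importance sampling.

    All names and hyperparameters here are assumptions, not the paper's.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample Gaussian perturbations of the pre-logit vector.
    eps = rng.normal(0.0, sigma, size=(n_samples, pre_logit.shape[-1]))
    # Evaluate a scalar reward for each perturbed pre-logit.
    rewards = np.array([reward_fn(pre_logit + e) for e in eps])
    # Exponentiated-reward importance weights (max-shifted for stability).
    w = np.exp((rewards - rewards.max()) / temperature)
    w /= w.sum()
    # The reward-weighted average of the perturbations estimates the mean.
    return w @ eps
```

As a sanity check, with a quadratic reward peaked at a target direction, the estimated mean should point toward that target; the weighting concentrates on high-reward samples, analogous to sampling-based model predictive control updates.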
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 10984