AdaBoN: Adaptive Best-of-$N$ Alignment

ICLR 2026 Conference Submission 13155 Authors

18 Sept 2025 (modified: 20 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Test-time Alignment, Best-of-N
TL;DR: We design an efficient test-time allocation policy for Best-of-N alignment
Abstract: Recent advances in test-time alignment methods, such as Best-of-$N$ sampling, offer a simple and effective way to steer language models (LMs) toward preferred behaviors using reward models (RMs). However, these approaches can be computationally expensive, especially when applied uniformly across prompts without accounting for differences in alignment difficulty. In this work, we propose a prompt-adaptive strategy for Best-of-$N$ alignment that allocates inference-time compute more efficiently. Motivated by latency concerns, we develop a two-stage algorithm: an initial exploratory phase estimates the reward distribution for each prompt using a small exploration budget, and a second stage adaptively allocates the remaining budget using these estimates. Our method is simple, practical, and compatible with any LM–RM combination. Empirical results on prompts from the AlpacaEval, HH-RLHF, and PKU-SafeRLHF datasets, covering 12 LM–RM pairs and 50 different batches of prompts, show that our adaptive strategy consistently outperforms uniform allocation under the same inference budget. Moreover, our experiments show that our adaptive strategy remains competitive against uniform allocations with $20\%$ larger inference budgets, and its performance improves as the batch size grows.
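The abstract outlines a two-stage procedure: explore each prompt with a small budget, then allocate the remaining samples adaptively. The following is a minimal sketch of that idea, assuming a proportional-to-reward-spread allocation rule (the paper's exact allocation policy is not specified in the abstract); `generate` and `reward` are hypothetical LM-sampling and RM-scoring interfaces, and `explore_per_prompt` is an illustrative parameter.

```python
import numpy as np

def adaptive_best_of_n(prompts, generate, reward, total_budget, explore_per_prompt=2):
    """Two-stage adaptive Best-of-N sketch (illustrative, not the paper's exact rule).

    Stage 1: spend a small exploration budget per prompt and score samples
    with the reward model to estimate each prompt's reward spread.
    Stage 2: split the remaining budget across prompts in proportion to the
    estimated spread, then return the highest-reward response per prompt.
    """
    n = len(prompts)
    samples = {i: [] for i in range(n)}
    scores = {i: [] for i in range(n)}

    # Stage 1: uniform exploration with a small per-prompt budget.
    for i, p in enumerate(prompts):
        for _ in range(explore_per_prompt):
            y = generate(p)                  # hypothetical LM sampling call
            samples[i].append(y)
            scores[i].append(reward(p, y))   # hypothetical RM scoring call

    # Estimate per-prompt "difficulty" from the exploratory reward spread.
    spread = np.array([np.std(scores[i]) + 1e-8 for i in range(n)])
    remaining = max(total_budget - n * explore_per_prompt, 0)
    alloc = np.floor(remaining * spread / spread.sum()).astype(int)

    # Stage 2: adaptive sampling with the remaining budget.
    for i, p in enumerate(prompts):
        for _ in range(alloc[i]):
            y = generate(p)
            samples[i].append(y)
            scores[i].append(reward(p, y))

    # Best-of-N selection per prompt.
    return [samples[i][int(np.argmax(scores[i]))] for i in range(n)]
```

The choice of reward standard deviation as the difficulty signal is an assumption made for concreteness; any per-prompt statistic estimated in the exploration phase could drive the second-stage allocation.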
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 13155