Adaptive Decoding via Latent Preference Optimization

ICLR 2026 Conference Submission19660 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: temperature, creativity, reasoning, alignment
TL;DR: We propose Adaptive Decoding, a method for dynamically selecting decoding temperatures in language models. Using Latent Preference Optimization, our method for training discrete latent variables, we outperform fixed temperatures across diverse tasks.
Abstract: During language model decoding, it is known that higher-temperature sampling gives more creative responses, while lower temperatures give more factually accurate ones. However, such models are commonly applied to general instruction following, which involves both creative and fact-seeking tasks, using a single fixed temperature across all examples and tokens. In this work, we introduce Adaptive Decoding, a layer added to the model to select the sampling temperature dynamically at inference time, at either the token or example level, in order to optimize performance. To learn its parameters we introduce Latent Preference Optimization (LPO), a general approach to train discrete latent variables such as choices of temperature. Our method outperforms all fixed decoding temperatures across a range of tasks that require different temperatures, including UltraFeedback, Creative Story Writing, and GSM8K.
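The core decoding-time mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the added layer is a small linear head over the decoder hidden state that outputs a distribution over a discrete set of candidate temperatures, from which one is sampled per token and used to scale the next-token logits. The names `TEMPERATURES`, `W_temp`, and `adaptive_decode_step`, and the specific candidate values, are hypothetical.

```python
import numpy as np

# Hypothetical discrete temperature choices (not from the paper)
TEMPERATURES = [0.1, 0.6, 1.0]

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def adaptive_decode_step(hidden, token_logits, W_temp, rng):
    """One token of adaptive decoding (illustrative sketch).

    hidden:       (d,)  decoder hidden state at the current position
    token_logits: (V,)  next-token logits from the LM head
    W_temp:       (d, len(TEMPERATURES)) parameters of the temperature head
    """
    # The added layer maps the hidden state to a distribution over temperatures.
    temp_probs = softmax(hidden @ W_temp)
    t_idx = rng.choice(len(TEMPERATURES), p=temp_probs)
    tau = TEMPERATURES[t_idx]
    # Sample the next token with the chosen temperature.
    token_probs = softmax(token_logits / tau)
    token = rng.choice(len(token_logits), p=token_probs)
    return token, tau
```

At the example level, the same head would be applied once (e.g. to the prompt's final hidden state) and the chosen temperature reused for the whole response. Training the head with LPO would then construct preference pairs over the sampled temperature choices, a detail not covered by this sketch.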
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 19660