Abstract: Large language models (LLMs) have demonstrated impressive performance, but they often lack the flexibility to adapt to human preferences quickly without retraining. Inspired by recent efforts on test-time scaling, we make a first attempt at test-time alignment and propose Test-time Preference Optimization (TPO), a framework that aligns LLM outputs with human preferences during inference, eliminating the need to update model parameters. Instead of relying on purely numerical rewards, TPO translates reward signals into \emph{textual} critiques and uses them as textual rewards to iteratively refine its responses. Evaluations on benchmarks covering instruction following, preference alignment, safety, and mathematics reveal that TPO progressively improves alignment with human preferences. Notably, after only a few TPO steps, the initially unaligned Llama-3.1-70B-SFT model can surpass its aligned counterpart, Llama-3.1-70B-Instruct. Furthermore, TPO scales efficiently with both the search width and the search depth of the inference process. Through case studies, we illustrate how TPO exploits the innate capacity of LLMs to interpret and act upon reward signals. Our findings establish TPO as a practical, lightweight alternative to training-time preference optimization, achieving alignment on the fly.
Lay Summary: Large language models (LLMs) like ChatGPT often need retraining to better follow human preferences, a costly and inflexible process. Our method, Test-time Preference Optimization (TPO), sidesteps this by improving responses on the fly, without changing the model itself. TPO works by first generating multiple answers to a question (parallel sampling), then using a separate reward model to identify the best and worst ones. The model reflects on their strengths and weaknesses and rewrites an improved version, much like revising an essay based on feedback. Repeating this process just once or twice significantly improves the model's alignment with human values. TPO combines the breadth of sampling with the depth of iterative revision, enabling models to adapt quickly and intelligently at test time. This makes AI systems both more helpful and more efficient, with no retraining required.
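A minimal sketch of this sample-score-critique-rewrite loop, assuming hypothetical callables `generate_fn` (samples candidate responses from the policy LLM) and `reward_fn` (scores a response with the reward model); the prompt templates and function names here are illustrative and not the exact ones used in the paper or repository:

```python
from typing import Callable, List


def tpo_loop(query: str,
             generate_fn: Callable[[str, int], List[str]],
             reward_fn: Callable[[str, str], float],
             num_samples: int = 4,      # search width (parallel sampling)
             num_iterations: int = 2    # search depth (iterative revision)
             ) -> str:
    """Sketch of a TPO-style test-time loop: sample, score, critique, rewrite."""
    # Breadth: sample several candidate responses in parallel.
    candidates = generate_fn(query, num_samples)

    for _ in range(num_iterations):
        # Score candidates with the reward model and pick the best and worst.
        ranked = sorted(candidates, key=lambda r: reward_fn(query, r))
        worst, best = ranked[0], ranked[-1]

        # Turn the numerical reward signal into a textual critique by asking
        # the model to compare the chosen (best) and rejected (worst) responses.
        critique_prompt = (
            f"Query: {query}\n\nBetter response:\n{best}\n\n"
            f"Worse response:\n{worst}\n\n"
            "Explain why the better response is preferable and how it could still improve."
        )
        critique = generate_fn(critique_prompt, 1)[0]

        # Depth: rewrite guided by the textual critique, producing new candidates.
        rewrite_prompt = (
            f"Query: {query}\n\nDraft response:\n{best}\n\n"
            f"Feedback:\n{critique}\n\nWrite an improved response."
        )
        candidates = generate_fn(rewrite_prompt, num_samples)

    # Return the highest-reward response after the final iteration.
    return max(candidates, key=lambda r: reward_fn(query, r))
```

In practice, `generate_fn` and `reward_fn` would wrap a policy LLM and a trained reward model; the full implementation is available at the repository linked below.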
Link To Code: https://github.com/yafuly/TPO
Primary Area: Deep Learning->Large Language Models
Keywords: preference alignment, inference-time scaling
Submission Number: 2547