TGDPO: Harnessing Token-Level Reward Guidance for Enhancing Direct Preference Optimization

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Recent advances in reinforcement learning from human feedback have shown that fine-grained token-level reward models can substantially enhance the performance of Proximal Policy Optimization (PPO) in aligning large language models. However, it is challenging to leverage such token-level rewards as guidance for Direct Preference Optimization (DPO), since DPO is formulated as a sequence-level bandit problem. To address this challenge, this work decomposes sequence-level PPO into a sequence of token-level proximal policy optimization problems and then formulates token-level PPO with token-level reward guidance, from which the closed-form optimal token-level policy and the corresponding token-level reward can be derived. Using the obtained reward and the Bradley-Terry model, this work establishes a framework of computable loss functions with token-level reward guidance for DPO, and proposes a practical reward guidance based on the induced DPO reward. This formulation enables different tokens to deviate from the reference policy to varying degrees according to their respective rewards. Experimental results demonstrate that our method achieves substantial performance improvements over DPO, with win rate gains of up to 7.5 points on MT-Bench, 6.2 points on AlpacaEval 2, and 4.3 points on Arena-Hard. Code is available at https://github.com/dvlab-research/TGDPO.
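To make the idea concrete, the sketch below shows one way per-token coefficients derived from token-level rewards could enter a DPO-style loss: each token's policy-to-reference log-ratio is weighted before summing over the sequence, so tokens with different rewards are pulled away from the reference policy by different amounts. This is an illustrative sketch only; the function name, tensor shapes, and the particular weighting scheme are assumptions made for exposition, not the paper's exact formulation (see the paper and code repository for that).

# Illustrative sketch (not the paper's exact loss): a DPO-style objective in which
# each token's log-ratio term carries its own coefficient derived from a
# token-level reward. Uniform weights recover the standard (sequence-level) DPO loss.
import torch
import torch.nn.functional as F

def token_guided_dpo_loss(
    policy_logps_w, ref_logps_w,       # per-token log-probs of the chosen response, shape (B, Tw)
    policy_logps_l, ref_logps_l,       # per-token log-probs of the rejected response, shape (B, Tl)
    token_weights_w, token_weights_l,  # hypothetical per-token coefficients from token-level rewards
    beta: float = 0.1,
):
    # Per-token log-ratios between the policy and the reference model.
    ratios_w = policy_logps_w - ref_logps_w
    ratios_l = policy_logps_l - ref_logps_l

    # Weight each token's contribution before summing over the sequence,
    # so tokens with different rewards deviate from the reference by different amounts.
    margin = (token_weights_w * ratios_w).sum(dim=-1) - (token_weights_l * ratios_l).sum(dim=-1)

    # Standard Bradley-Terry / DPO negative log-sigmoid of the scaled margin.
    return -F.logsigmoid(beta * margin).mean()

if __name__ == "__main__":
    # Dummy inputs purely to demonstrate the call; real per-token log-probs
    # would come from the policy and reference language models.
    B, Tw, Tl = 4, 12, 10
    loss = token_guided_dpo_loss(
        torch.randn(B, Tw), torch.randn(B, Tw),
        torch.randn(B, Tl), torch.randn(B, Tl),
        token_weights_w=torch.ones(B, Tw),  # all-ones weights reduce to vanilla DPO
        token_weights_l=torch.ones(B, Tl),
        beta=0.1,
    )
    print(loss.item())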
Lay Summary: Large language models like ChatGPT learn to generate helpful responses through reinforcement learning from human feedback. A key part of this process is teaching the model which responses are better, often via numerical rewards. Traditionally, these rewards are given for the entire response, but recent advances in a reinforcement learning algorithm called Proximal Policy Optimization show that giving rewards for each individual word, or token, can help models learn more effectively. However, this idea does not easily fit with another popular method called Direct Preference Optimization (DPO), which learns from entire responses. Our work bridges this gap. We break the learning process down so that it operates at the token level, allowing token-specific feedback to guide the model's learning. The model can then adjust each word it generates based on how good or bad that word is judged to be, and as a result it learns to generate better responses overall. Experiments show that our method significantly improves performance over existing preference optimization methods on standard benchmarks that evaluate how well AI models follow instructions.
Link To Code: https://github.com/dvlab-research/TGDPO
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, Preference Optimization
Submission Number: 14203