Redistributing Token-Level Rewards from Sequence-Level Feedback

TMLR Paper3883 Authors

08 Jan 2025 (modified: 14 Apr 2025) · Rejected by TMLR · CC BY 4.0
Abstract: Reinforcement learning from human feedback (RLHF) offers a promising approach to aligning large language models (LLMs) with human preferences. Typically, a reward model is trained or supplied to act as a proxy for humans in evaluating generated responses during the reinforcement learning phase. However, current reward models operate as sequence-to-one models, allocating a single, sparse, and delayed reward to an entire output sequence. This approach may overlook the significant contributions of individual tokens toward the desired outcome. To address this limitation, we propose a more fine-grained, token-level guidance approach for RL training. Specifically, we introduce RED, a novel REward reDistribution method that evaluates and assigns specific credit to each token using an off-the-shelf reward model. By utilizing these fine-grained rewards, we enhance the model's understanding of language nuances, leading to more precise performance improvements. Notably, our method does not require modifying the reward model or introducing additional training steps, thereby incurring minimal computational cost. Through comprehensive experiments across diverse datasets and tasks, we validate the effectiveness and superiority of our approach.
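To make the idea of turning a single sequence-level score into per-token credit concrete, below is a minimal, self-contained sketch. It assumes a prefix-difference redistribution rule (each token is credited with the change in the reward model's score when that token is appended), which is only an illustrative instantiation of token-level credit assignment, not the RED formulation described in the paper; the `toy_reward_model` function is likewise a hypothetical stand-in for an off-the-shelf reward model.

```python
# Illustrative sketch only: redistribute a sequence-level reward to tokens by
# scoring successive prefixes and taking differences. The prefix-difference
# rule is an assumption for illustration, not the paper's RED method.

from typing import Callable, List


def toy_reward_model(tokens: List[str]) -> float:
    """Hypothetical stand-in for an off-the-shelf sequence-level reward model.

    Returns a single scalar score for the whole (possibly partial) sequence;
    here it simply rewards sequences containing the word 'helpful'.
    """
    return 1.0 if "helpful" in tokens else 0.0


def redistribute_rewards(
    tokens: List[str],
    reward_fn: Callable[[List[str]], float],
) -> List[float]:
    """Assign per-token credit from a sequence-level reward model.

    Token i receives the change in the reward model's score when it is
    appended to the preceding prefix, so the per-token credits sum to the
    full-sequence reward minus the empty-prefix score.
    """
    credits = []
    prev_score = reward_fn([])
    for i in range(1, len(tokens) + 1):
        score = reward_fn(tokens[:i])
        credits.append(score - prev_score)
        prev_score = score
    return credits


if __name__ == "__main__":
    response = ["The", "assistant", "gave", "a", "helpful", "answer"]
    print(redistribute_rewards(response, toy_reward_model))
    # -> [0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
```

Dense per-token signals of this kind can then replace the single delayed reward in the RL objective; the exact credit-assignment rule used by RED is specified in the paper itself.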
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We provide a human evaluation assessing the quality of the token-wise rewards, which can be found in Appendix C. Additionally, we have included an explanation of the assumptions underlying the convergence property in Appendix A.
Assigned Action Editor: ~Amrit_Bedi1
Submission Number: 3883
