A Critical Look At Tokenwise Reward-Guided Text Generation

Published: 03 Jul 2024, Last Modified: 17 Jul 2024
Venue: ICML 2024 FM-Wild Workshop Poster
License: CC BY 4.0
Keywords: RLHF, Training Cost, LLM, Efficiency, Alignment
TL;DR: We analyze tokenwise reward-guided text generation (RGTG) and show that explicitly training reward models on partial sequences is better for RGTG.
Abstract: Large language models (LLMs) can be significantly improved by aligning them to human preferences via reinforcement learning from human feedback (RLHF). However, the cost of fine-tuning an LLM is prohibitive for many users. Tokenwise reward-guided text generation (RGTG) methods, which bypass LLM fine-tuning, have recently been proposed. They use a reward model trained on full sequences to score partial sequences during tokenwise decoding, steering generation towards sequences with high reward. However, these methods have so far been only heuristically motivated and poorly analyzed. In this work, we show that reward models trained on full sequences are not compatible with scoring partial sequences. To alleviate this issue, we propose to explicitly train a Bradley-Terry reward model on partial sequences and to sample autoregressively from the implied tokenwise policy during decoding. We study the properties of this reward model and the implied policy. In particular, we show that this policy is proportional to the ratio of two distinct RLHF policies. We show that our simple approach outperforms previous RGTG methods and achieves performance similar to strong offline baselines, but without large-scale LLM fine-tuning.
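To make the tokenwise decoding loop described in the abstract concrete, below is a minimal, self-contained sketch of reward-guided sampling with a partial-sequence reward model. Everything in it (the toy vocabulary, the stand-in functions toy_lm_logits and toy_partial_reward, and the guidance weight BETA) is an illustrative assumption rather than the paper's implementation: at each step, the base model's next-token logits are re-weighted by the reward of each candidate partial sequence before sampling.

```python
import numpy as np

# Minimal sketch of tokenwise reward-guided decoding (RGTG) with a
# partial-sequence reward model. Toy stand-ins only; not the paper's code.

rng = np.random.default_rng(0)
VOCAB = 16            # toy vocabulary size (assumption)
BETA = 1.0            # strength of reward guidance (assumption)
MAX_NEW_TOKENS = 8

def toy_lm_logits(prefix):
    """Stand-in for a base LLM: next-token logits given the current prefix."""
    local = np.random.default_rng(hash(tuple(prefix)) % (2**32))
    return local.normal(size=VOCAB)

def toy_partial_reward(prefix):
    """Stand-in for a reward model trained on partial sequences:
    scores the prefix ending at the current candidate token."""
    return 0.1 * sum(prefix) / (len(prefix) + 1)

def rgtg_decode(prompt_ids):
    """At each step, score every candidate next token by combining the base
    LM logits with the partial-sequence reward, then sample a token."""
    seq = list(prompt_ids)
    for _ in range(MAX_NEW_TOKENS):
        logits = toy_lm_logits(seq)
        rewards = np.array(
            [toy_partial_reward(seq + [tok]) for tok in range(VOCAB)]
        )
        guided = logits + BETA * rewards          # reward-guided logits
        probs = np.exp(guided - guided.max())
        probs /= probs.sum()
        seq.append(int(rng.choice(VOCAB, p=probs)))
    return seq

print(rgtg_decode([1, 2, 3]))
```

In this sketch the reward model is assumed to accept partial sequences directly, which is the point the paper argues for; plugging in a reward model trained only on full sequences would make these intermediate scores ill-defined.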
Submission Number: 78