Abstract: We propose a novel $K$-step return estimation method (called $K$ETCHUP) for Reinforcement Learning (RL)-based knowledge distillation (KD) in text generation tasks. Our idea is to induce a $K$-step return by applying the Bellman Optimality Equation over multiple steps. Theoretical analysis shows that this
$K$-step formulation reduces the variance of the gradient estimates, thus leading to improved RL optimization, especially when the student model size is large. Empirical evaluation on three text generation tasks demonstrates that our approach yields superior performance in both standard task metrics and large language model (LLM)-based evaluation. These results suggest that our $K$-step return induction offers a promising direction for enhancing RL-based KD in LLM research.
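For reference, a standard $K$-step unrolling of the Bellman Optimality Equation takes the form sketched below; this is the textbook formulation (with assumed notation: reward $r_t$, discount $\gamma$, optimal action-value $Q^*$), and the exact estimator used by $K$ETCHUP is defined in the paper body.

```latex
% Textbook K-step Bellman-optimality unrolling (assumed notation;
% the paper's exact K-step return estimator may differ).
Q^*(s_t, a_t)
  = \mathbb{E}\!\left[\sum_{i=0}^{K-1} \gamma^{i}\, r_{t+i}
    + \gamma^{K} \max_{a'} Q^*(s_{t+K}, a')\right]
```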
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: Knowledge Distillation, Machine Learning
Contribution Types: Approaches to low-compute settings / efficiency
Languages Studied: English
Submission Number: 2462