Abstract: Reinforcement Learning from Human Feedback (RLHF) is crucial for aligning Large Language Models (LLMs) with human values. However, RLHF has been continuously challenged by its implementation complexity and computational cost, especially for online sampling-based methods such as Proximal Policy Optimization (PPO) and Group Relative Policy Optimization (GRPO). Even with recent simplifications such as Direct Preference Optimization (DPO), which designs an offline implicit reward learning objective over pre-collected preference datasets, over-fitting and training instability continue to keep alignment from reaching its expected optimal performance. To address these challenges, we propose a novel simplification of RLHF from the perspective of variational inference, called **V**ariational **A**lignment with **R**e-weighting (**VAR**). Specifically, by directly minimizing the distribution gap between the learning LLM policy and the optimal solution of RLHF, we transform the alignment objective into an offline, reward-driven, re-weighted supervised fine-tuning (SFT) form, which requires only a minor adjustment to the SFT loss to obtain noticeable improvements in training stability and effectiveness. On comprehensive evaluation benchmarks, our objective enables LLMs to outperform offline alignment methods, demonstrating superior performance on both helpfulness and harmlessness metrics (avg. $\uparrow7.16\%$ over DPO). Compared to online sampling methods, our method is comparable or even better while significantly reducing computational overhead and accelerating convergence (over $5\times$ faster than GRPO), suggesting our approach is an efficient and effective solution for bridging the gap between efficiency and performance in LLM alignment.
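For orientation only, below is a minimal PyTorch-style sketch of what a reward-driven, re-weighted SFT loss of the kind described in the abstract could look like. The function name `reweighted_sft_loss`, the temperature `beta`, and the batch-softmax normalization of the weights are illustrative assumptions, not the paper's exact VAR objective; see the paper and the released code for the actual formulation.

```python
import torch
import torch.nn.functional as F

def reweighted_sft_loss(logits, labels, rewards, beta=1.0):
    # logits: (B, T, V) LM outputs; labels: (B, T) target token ids with -100 at padded positions;
    # rewards: (B,) scalar reward-model scores for each (prompt, response) pair.
    # Per-sequence negative log-likelihood (padded positions contribute zero).
    nll = F.cross_entropy(
        logits.transpose(1, 2), labels, ignore_index=-100, reduction="none"
    ).sum(dim=-1)                                             # (B,)
    # Exponential reward weights; a batch softmax keeps them bounded and normalized.
    weights = torch.softmax(rewards / beta, dim=0).detach()   # (B,)
    return (weights * nll).sum()
```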
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have revised the manuscript to address all feedback from the Action Editor and the reviewers. The specific changes are as follows:
* **New Generalization Experiments:** We added results on **AlpacaEval 2.0** and **Arena-Hard 0.1** (Section 4.3, Table 6) to demonstrate the model's out-of-domain conversational capabilities.
* **Partition Function Analysis:** We added **Section 4.6 and Figure 4** to analyze the variance of the partition function estimator with respect to the micro-batch size. We also added an ablation study in **Table 7** comparing our in-batch estimator against a separately sampled baseline (a rough sketch of such an estimator follows this list).
* **Computational Overhead Profiling:** We added **Section 4.7 and Table 8**, providing a direct comparison of wall-clock time per epoch, throughput (tokens/s), and peak GPU memory between VAR, DPO, and SFT under identical settings.
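As a rough illustration of what an in-batch partition-function estimator might look like (the Monte Carlo form and the assumption that the micro-batch shares a prompt are ours, not necessarily the estimator analyzed in Section 4.6):

```python
import torch

# Hypothetical Monte Carlo estimate of log Z(x), where
# Z(x) = E_{y ~ pi_ref}[ exp(r(x, y) / beta) ], using the rewards of the
# responses in the current micro-batch (assumed here to share the same
# prompt x) as the samples. Illustrative sketch only.
def in_batch_log_partition(rewards: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    # rewards: (B,) reward-model scores for the B in-batch responses.
    # log Z_hat = logsumexp(r / beta) - log B, computed stably with logsumexp.
    batch_size = rewards.new_tensor(float(rewards.numel()))
    return torch.logsumexp(rewards / beta, dim=0) - torch.log(batch_size)
```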
**Methodological Clarifications:**
* In **Section 4.1**, we explicitly clarified that the win-rate evaluation utilizes the MT-Bench judging protocol (GPT-4 judge + template) applied to **HHA prompts**, rather than the MT-Bench benchmark itself.
* In **Section 3.3**, we refined the description of DPO instability to focus on imbalanced gradient dynamics rather than just negative weights.
* We clarified the definition of "micro-batch" and the theoretical justification for the exponential weighting function derived from the Bradley-Terry model (the standard closed form behind this weighting is recalled below).
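For context on the last item: exponential weightings of this kind arise as the well-known closed-form optimum of the KL-regularized RLHF objective (the same form used in the DPO derivation), with $Z(x)$ the partition function whose in-batch estimation is analyzed in Section 4.6; the paper's own derivation may differ in details.

$$
\pi^{*}(y \mid x) = \frac{1}{Z(x)}\,\pi_{\mathrm{ref}}(y \mid x)\exp\!\Big(\tfrac{1}{\beta}\,r(x,y)\Big),
\qquad
Z(x) = \sum_{y}\pi_{\mathrm{ref}}(y \mid x)\exp\!\Big(\tfrac{1}{\beta}\,r(x,y)\Big).
$$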
**Formatting and Content:**
* Added **Impact Statement** and Acknowledgment following the Conclusion.
* Updated **Section 5 (Related Work)** to include suggested citations.
* Added directional indicators to all result tables for improved readability.
Code: https://github.com/DuYooho/VAR
Supplementary Material: zip
Assigned Action Editor: ~Jiang_Bian1
Submission Number: 5736