Right Now, Wrong Then: Non-Stationary Direct Preference Optimization under Preference Drift

ICLR 2025 Conference Submission 3195 Authors

23 Sept 2024 (modified: 28 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: LLM, fine-tuning, DPO, non-stationarity, preference drift, RLHF
TL;DR: We address non-stationary preference drift in LLMs with an exponential reweighting strategy.
Abstract: Current Large Language Model (LLM) preference optimization algorithms do not account for temporal preference drift, which can lead to severe misalignment. To address this limitation, we propose an offline fine-tuning algorithm, Non-Stationary Direct Preference Optimization (NS-DPO), which models time-dependent reward functions with a Dynamic Bradley-Terry model. NS-DPO applies exponential weighting, introducing a discount parameter into the loss function that proportionally focuses learning on more time-relevant datapoints. We theoretically analyse the convergence of NS-DPO, providing upper bounds on the estimation error and regret caused by non-stationary preferences. Finally, we demonstrate the effectiveness of NS-DPO for fine-tuning LLMs in scenarios with drifting preferences. By simulating preference drift with popular LLM reward models and datasets, we show that NS-DPO fine-tuned LLMs remain robust under non-stationarity, significantly outperforming baseline algorithms that ignore temporal preference changes, without sacrificing performance in stationary cases.
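For concreteness, the exponential reweighting described in the abstract can be sketched as a per-example discount gamma^(T - t) applied to the standard DPO loss. The snippet below is a minimal illustrative sketch, not the paper's implementation: the function name, the beta and gamma defaults, and the tensor layout are our assumptions.

```python
import torch
import torch.nn.functional as F

def ns_dpo_loss(policy_chosen_logps: torch.Tensor,
                policy_rejected_logps: torch.Tensor,
                ref_chosen_logps: torch.Tensor,
                ref_rejected_logps: torch.Tensor,
                timestamps: torch.Tensor,
                T: int,
                beta: float = 0.1,
                gamma: float = 0.95) -> torch.Tensor:
    """Exponentially reweighted DPO loss (illustrative sketch).

    A datapoint collected at time t <= T is discounted by gamma**(T - t),
    so more recent preference data dominates the gradient. Setting
    gamma = 1 recovers the standard (stationary) DPO objective.
    """
    # Implicit reward margin of the chosen response over the rejected one,
    # measured relative to the frozen reference policy (standard DPO).
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    # Exponential discount: older datapoints receive smaller weights.
    weights = gamma ** (T - timestamps).float()
    # Weighted Bradley-Terry negative log-likelihood over the batch.
    return (weights * -F.logsigmoid(logits)).mean()
```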
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3195