HyperDPO: Conditioned One-Shot Multi-Objective Fine-Tuning Framework

Published: 10 Oct 2024, Last Modified: 26 Nov 2024, FITML 2024 Poster, CC BY 4.0
Keywords: Direct Preference Optimization, Multi-Objective Optimization, Alignment, Prompt Tuning
TL;DR: We propose HyperDPO, an efficient and versatile multi-objective fine-tuning framework, and demonstrate its effectiveness on large-scale ML tasks such as Learning-to-Rank and LLM alignment.
Abstract: In LLM alignment and many other ML applications, one often faces the *Multi-Objective Fine-Tuning (MOFT)* problem, *i.e.* fine-tuning an existing model with datasets labeled w.r.t. different objectives simultaneously. To address this challenge, we propose the *HyperDPO* framework, a conditioned one-shot fine-tuning approach that extends the Direct Preference Optimization (DPO) technique, originally developed for efficient LLM alignment with preference data, to accommodate MOFT settings. By substituting the Bradley-Terry-Luce model in DPO with the Plackett-Luce model, our framework can handle a wide range of MOFT tasks that involve listwise ranking datasets. Compared with previous approaches, HyperDPO enjoys an efficient one-shot training process for profiling the Pareto front of auxiliary objectives and offers post-training control over trade-offs. Additionally, we propose a novel *Hyper Prompt Tuning* design that conveys continuous importance weights across objectives to transformer-based models without altering their architecture, and we investigate the potential of *temperature-conditioned networks* for enhancing the flexibility of post-training control. We demonstrate the effectiveness and efficiency of the HyperDPO framework through applications to various tasks, including Learning-to-Rank (LTR) and LLM alignment, highlighting its viability for large-scale ML deployments.
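To make the listwise extension concrete, below is a minimal sketch (in PyTorch) of how a Plackett-Luce analogue of the DPO loss and a weighted multi-objective combination might look. This is not the authors' implementation: the function names, the assumption that each list of candidate responses arrives pre-sorted from most- to least-preferred, and the simple linear scalarization of per-objective losses are illustrative choices.

```python
import torch

def plackett_luce_dpo_loss(policy_logps, ref_logps, beta=0.1):
    """Listwise DPO-style loss under a Plackett-Luce model (illustrative sketch).

    policy_logps, ref_logps: tensors of shape (batch, K) with the log-probabilities
    the fine-tuned and reference models assign to K candidate responses, ordered
    from most- to least-preferred according to the ranking labels. Uses
    beta * (log pi_theta - log pi_ref) as implicit rewards and returns the
    negative Plackett-Luce log-likelihood of the observed ranking.
    """
    rewards = beta * (policy_logps - ref_logps)                        # (batch, K)
    # Plackett-Luce normalizer at rank k: log sum_{j >= k} exp(reward_j),
    # computed as a reversed cumulative log-sum-exp.
    log_norm = torch.flip(
        torch.logcumsumexp(torch.flip(rewards, dims=[-1]), dim=-1), dims=[-1]
    )
    log_lik = (rewards - log_norm).sum(dim=-1)                         # per-list PL log-likelihood
    return -log_lik.mean()

def moft_loss(per_objective_losses, importance_weights):
    """Combine per-objective losses with continuous importance weights
    (e.g. the weights conveyed to the model via Hyper Prompt Tuning);
    a plain linear scalarization, shown here only for illustration."""
    return sum(w * loss for w, loss in zip(importance_weights, per_objective_losses))
```

In the HyperDPO setting described above, a single conditioned model is trained to cover many such weightings at once, so the Pareto front over auxiliary objectives can be profiled from one training run rather than one run per weight vector.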
Submission Number: 52