Discovering Preference Optimization Algorithms with and for Large Language Models

Published: 17 Jun 2024, Last Modified: 30 Jun 2024 · AutoRL@ICML 2024 · CC BY 4.0
Keywords: Preference optimization, RLHF, Large Language Models
TL;DR: We use LLMs to generate novel RLHF objectives, some of which achieve strong results.
Abstract: Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs. Typically, preference optimization is approached as an offline supervised learning task using manually crafted convex loss functions. While these methods offer theoretical insights, they are inherently constrained by human creativity, and the vast search space of possible loss functions remains largely unexplored. We address this by performing LLM-driven objective discovery to automatically discover new state-of-the-art preference optimization algorithms without expert human intervention. Specifically, we iteratively prompt an LLM to propose and implement new preference optimization loss functions based on previously evaluated performance metrics. This process leads to the discovery of previously unknown and performant preference optimization algorithms. From this exploration, we introduce Discovered Preference Optimization (DiscoPOP), a novel algorithm that adaptively blends logistic and exponential losses. Experiments demonstrate the state-of-the-art performance of DiscoPOP and its successful transfer to held-out tasks. We provide code at https://anonymous.4open.science/r/neurips2024_discopop/.
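To make the "adaptive blend of logistic and exponential losses" concrete, below is a minimal sketch of what such a blended offline preference loss could look like in a DPO-style setup. The sigmoid-based mixing rule, the temperature `tau`, the scale `beta`, and the function name are illustrative assumptions, not the paper's verified formulation or hyperparameters.

```python
# Hedged sketch of an adaptive blend of logistic and exponential preference
# losses. The mixing mechanism and hyperparameters are assumptions for
# illustration only.
import torch
import torch.nn.functional as F


def blended_preference_loss(policy_chosen_logps, policy_rejected_logps,
                            ref_chosen_logps, ref_rejected_logps,
                            beta=0.05, tau=0.05):
    # Difference of policy and reference log-ratios, as in DPO-style objectives.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    logits = pi_logratios - ref_logratios

    # Logistic (sigmoid cross-entropy) and exponential loss components.
    logistic_loss = -F.logsigmoid(beta * logits)
    exp_loss = torch.exp(-beta * logits)

    # Assumed mixing rule: a sigmoid of the logits decides, per example,
    # how much weight each component receives.
    mix = torch.sigmoid(logits / tau)
    return mix * logistic_loss + (1.0 - mix) * exp_loss
```

In this sketch the blend interpolates smoothly between the two components depending on the margin between chosen and rejected responses; the exact interpolation used by DiscoPOP is specified in the paper and accompanying code.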
Submission Number: 17