Optimizing Language Models for Human Preferences is a Causal Inference Problem

Published: 26 Apr 2024, Last Modified: 15 Jul 2024
Venue: UAI 2024 poster
License: CC BY 4.0
Keywords: causal inference, optimization, large language models, doubly robust
TL;DR: We propose that optimizing LLMs for human preferences should be viewed as a causal inference problem. Drawing on importance weighting and double robustness principles, we present two methods that solve unbiased surrogate objectives for this problem.
Abstract: As large language models (LLMs) see greater use in academic and commercial settings, there is increasing interest in methods that allow language models to generate texts aligned with human preferences. In this paper, we present an initial exploration of language model optimization for human preferences from *direct outcome datasets*, where each sample consists of a text and an associated numerical outcome measuring the reader's response. We first propose that language model optimization should be viewed as a *causal problem* to ensure that the model correctly learns the relationship between the text and the outcome. We formalize this causal language optimization problem, and we develop a method—*causal preference optimization* (CPO)—that solves an unbiased surrogate objective for the problem. We further extend CPO with *doubly robust* CPO (DR-CPO), which reduces the variance of the surrogate objective while retaining provably strong guarantees on bias. Finally, we empirically demonstrate the effectiveness of (DR-)CPO in optimizing state-of-the-art LLMs for human preferences on direct outcome data, and we validate the robustness of DR-CPO under difficult confounding conditions.
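As a rough illustration of the importance-weighting and double-robustness principles the TL;DR refers to, the sketch below gives the standard forms of these two estimators; the notation (a policy \(\pi_\theta\) being optimized, a behavior model \(p_0\) that generated the observed texts, outcomes \(y_i\), and an outcome model \(\hat{\mu}\)) is illustrative and is not taken from the paper, so the exact CPO and DR-CPO objectives may differ.

```latex
% Hypothetical notation (not the paper's): \pi_\theta is the language model
% being optimized, p_0 the behavior model that produced the observed texts
% x_i with outcomes y_i, and \hat{\mu} a learned outcome model.

% Importance-weighted surrogate: reweight observed outcomes by the density
% ratio so the objective is unbiased for the value of \pi_\theta.
\[
\hat{V}_{\mathrm{IPW}}(\theta)
  = \frac{1}{n}\sum_{i=1}^{n}
    \frac{\pi_\theta(x_i)}{p_0(x_i)}\, y_i
\]

% Doubly robust surrogate: use the outcome model as a baseline and correct
% it with reweighted residuals, which typically lowers variance while
% preserving guarantees on bias.
\[
\hat{V}_{\mathrm{DR}}(\theta)
  = \mathbb{E}_{x \sim \pi_\theta}\!\big[\hat{\mu}(x)\big]
  + \frac{1}{n}\sum_{i=1}^{n}
    \frac{\pi_\theta(x_i)}{p_0(x_i)}
    \big(y_i - \hat{\mu}(x_i)\big)
\]
```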
Supplementary Material: zip
List Of Authors: Lin, Victoria and Ben-Michael, Eli and Morency, Louis-Philippe
Latex Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/torylin/causal-preference-optimization
Submission Number: 626