Keywords: neuro-symbolic, logical inference, LLM alignment
TL;DR: Declarative characterizations of direct preference alignment algorithms
Abstract: Recent direct preference alignment algorithms (DPA), such as DPO, have shown great promise in aligning large language models to human preferences. While this has motivated the development of many new variants of the original DPO loss, understanding the differences between these recent proposals, as well as developing new DPA loss functions, remains difficult given the lack of a technical and conceptual framework for reasoning about the underlying semantics of these algorithms. In this paper, we attempt to remedy this by formalizing DPA losses in terms of discrete reasoning problems. Specifically, we ask: Given an existing DPA loss, can we systematically derive a symbolic expression that characterizes its semantics? How do the semantics of two losses relate to each other? We propose a novel formalism for characterizing preference losses for single-model and reference-model-based approaches, and identify symbolic forms for a number of commonly used DPA variants. Further, we show how this formal view of preference learning sheds new light on both the size and structure of the DPA loss landscape, making it possible not only to rigorously characterize the relationships between recent loss proposals but also to systematically explore the landscape and derive new loss functions from first principles. We hope our framework and findings will help provide useful guidance to those working on human-AI alignment.
Is NeurIPS Submission: No
Submission Number: 91