Square$\chi$PO: Differentially Private and Robust $\chi^2$-Preference Optimization in Offline Direct Alignment
TL;DR: New state-of-the-art theoretical guarantees for differentially private and robust offline alignment
Abstract: In this paper, we theoretically study the offline alignment of language models with human preference feedback, under both preference label corruption and privacy protection. To this end, we propose \texttt{Square}\texttt{$\chi$PO}, a variant of \texttt{$\chi$PO} obtained by a simple one-line change: the standard log-loss is replaced with a new square loss over the preference probability. Thanks to the favorable properties of this new loss, we advance the state of the art in differentially private and robust alignment. Specifically, under the local model of label privacy, \texttt{Square}\texttt{$\chi$PO} is the first algorithm that attains the optimal rate under single-policy concentrability, even with general function approximation. It also yields the first result under the central model of privacy, protecting both prompts (and responses) and labels. On the robustness side, against Huber label corruption, \texttt{Square}\texttt{$\chi$PO} is the first alignment method with a meaningful theoretical guarantee under general function approximation. More importantly, \texttt{Square}\texttt{$\chi$PO} can handle privacy protection and corruption \emph{simultaneously}, and here we observe an interesting separation implying that the order in which privacy and corruption are applied matters. Furthermore, we show that \texttt{Square}\texttt{$\chi$PO} extends easily to the general preference model, again with state-of-the-art guarantees under corruption and privacy. Last but not least, all of our theoretical guarantees follow from a unified analysis, built on a new generalization error bound for least-squares regression under corruption and privacy constraints, which we believe is of independent interest to the community.
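For a concrete picture of the one-line change, the following display is a minimal sketch, assuming a \texttt{$\chi$PO}-style pairwise objective with an implicit reward margin $h_\theta(x, y^+, y^-)$ (built from the policy ratios $\pi_\theta/\pi_{\mathrm{ref}}$ through \texttt{$\chi$PO}'s link function) and a binary preference label $z \in \{0,1\}$; the exact definitions in the paper may differ:
$$
\underbrace{-\,z\log\sigma\big(h_\theta(x,y^+,y^-)\big) - (1-z)\log\big(1-\sigma(h_\theta(x,y^+,y^-))\big)}_{\text{standard log-loss in } \texttt{$\chi$PO}}
\quad\longrightarrow\quad
\underbrace{\big(\sigma(h_\theta(x,y^+,y^-)) - z\big)^2}_{\text{square loss over the preference probability}} .
$$
In this sketch, only the outer loss applied to the predicted preference probability $\sigma(h_\theta)$ changes; the policy parameterization inside $h_\theta$ is left untouched, which is what makes the modification a one-line change.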
Lay Summary: Training large language models to align with human preferences is a key challenge in AI. But what if the human feedback is noisy, or needs to be kept private? In this work, we introduce a new, simple method that helps language models learn from such feedback, even when it is imperfect or must stay private. Our approach replaces the usual training loss with a new one that makes learning more stable and accurate. As a result, our method is the first to offer strong guarantees when both privacy and noise are present, covering cases where the feedback, the user prompts, or both need to stay private. We also show it works well even when the model has to make complex decisions. One surprising insight from our work is that it makes a big difference whether privacy protections are applied before or after the data is corrupted. Overall, our findings not only improve current techniques but also provide new theoretical tools for building more trustworthy and robust AI systems.
Primary Area: Theory->Reinforcement Learning and Planning
Keywords: Offline alignment, general function approximations, differential privacy, robustness, single-policy concentrability
Submission Number: 11783