DPO Kernels: A Semantically-Aware, Kernel-Enhanced, and Divergence-Rich Paradigm for Direct Preference Optimization
Abstract: The rapid advancement of large language models (LLMs) has revolutionized numerous applications, yet aligning these models with diverse human values, ethical standards, and specific user preferences remains a significant challenge. Direct Preference Optimization (DPO) has become a cornerstone of preference alignment but is constrained by its reliance on a fixed divergence measure and limited feature transformations. We introduce \textbf{DPO-Kernels}, an enhancement of DPO that integrates kernel methods to overcome these limitations through four key contributions: (i) \textbf{Kernelized Representations}: polynomial, RBF, Mahalanobis, and spectral kernels provide richer feature transformations that enhance the divergence measures; in addition, we introduce a \textbf{hybrid loss} that combines an embedding-based loss with the standard probability-based loss; (ii) \textbf{Divergence Alternatives}: beyond Kullback–Leibler (KL), we incorporate Jensen–Shannon, Hellinger, Rényi, Bhattacharyya, Wasserstein, and f-divergences to improve stability and robustness; (iii) \textbf{Data-Driven Selection}: choosing the optimal kernel-divergence pair among the 28 combinations (4 kernels $\times$ 7 divergences) is challenging, so we introduce automatic, data-driven metrics that select the best pair and eliminate manual tuning; (iv) \textbf{Hierarchical Mixture of Kernels (HMK)}: a combination of local and global kernels for fine-grained and large-scale semantic modeling, with the optimal kernel mixture selected automatically during training. DPO-Kernels achieve state-of-the-art generalization in factuality, safety, reasoning, and instruction following across 12 datasets. Although alignment risks overfitting, an analysis based on Heavy-Tailed Self-Regularization (HT-SR) theory confirms that DPO-Kernels maintain robust generalization in LLMs. Comprehensive resources are \href{https://github.com/anonymous-panda123/DPO-Kernels}{available} to facilitate further research and application of DPO-Kernels.
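To make the abstract's kernelized, divergence-rich objective more concrete, the sketch below shows one plausible instantiation in Python: two of the named kernels (polynomial and RBF) over sentence embeddings, a Jensen–Shannon divergence as an example alternative to KL, and a hybrid loss that mixes an embedding-based kernel term with the standard probability-based DPO term. The function names, the mixing weight `alpha`, and the exact form of the objective are illustrative assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

# --- Kernels over sentence embeddings (two of the kernels named in the abstract) ---
def polynomial_kernel(x, y, degree=2, c=1.0):
    # k(x, y) = (x . y + c)^degree, computed row-wise for batches of embeddings
    return (torch.sum(x * y, dim=-1) + c) ** degree

def rbf_kernel(x, y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    return torch.exp(-gamma * torch.sum((x - y) ** 2, dim=-1))

# --- Example divergence alternative to KL: Jensen-Shannon divergence ---
def js_divergence(p, q, eps=1e-8):
    # p, q: probability distributions over the vocabulary, shape (..., V)
    m = 0.5 * (p + q)
    kl_pm = torch.sum(p * (torch.log(p + eps) - torch.log(m + eps)), dim=-1)
    kl_qm = torch.sum(q * (torch.log(q + eps) - torch.log(m + eps)), dim=-1)
    return 0.5 * (kl_pm + kl_qm)

# --- Hybrid DPO-style loss: probability-based term + embedding/kernel term ---
# (assumed form; alpha and the kernel choice are hypothetical hyperparameters)
def hybrid_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                    ref_chosen_logps, ref_rejected_logps,
                    prompt_emb, chosen_emb, rejected_emb,
                    beta=0.1, alpha=0.5, kernel=rbf_kernel):
    # Standard DPO probability-based term: sigmoid loss on the scaled log-ratio margin.
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    prob_loss = -F.logsigmoid(logits)

    # Embedding-based term: push the chosen response to be closer (in kernel
    # similarity) to the prompt than the rejected response is.
    sim_chosen = kernel(prompt_emb, chosen_emb)
    sim_rejected = kernel(prompt_emb, rejected_emb)
    emb_loss = -F.logsigmoid(sim_chosen - sim_rejected)

    # Hybrid combination of the two terms.
    return (alpha * prob_loss + (1.0 - alpha) * emb_loss).mean()
```

Under these assumptions, swapping `rbf_kernel` for `polynomial_kernel` (or a Mahalanobis or spectral kernel) and pairing it with a divergence such as `js_divergence` in place of KL would correspond to one of the 28 kernel-divergence combinations the abstract refers to; the data-driven selection metrics and the Hierarchical Mixture of Kernels are not shown here.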
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: alignment, dpo-kernels
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 4252