Keywords: Preference Optimization, Latent Intent, Intention Alignment
Abstract: Human preferences are diverse and dynamic, shaped by regional, cultural, and social factors. Existing alignment methods such as Direct Preference Optimization (DPO) and its variants often default to majority views, overlooking minority opinions and failing to capture the latent intentions behind user prompts.
To address these limitations, we propose Adaptive Intent-driven Preference Optimization (A-IPO). A-IPO introduces an intention module that infers the latent intent behind each user prompt and explicitly incorporates this inferred intent into the reward function, encouraging stronger alignment between the model's preferred responses and the user's underlying intention. We demonstrate, both theoretically and empirically, that incorporating an intention–response similarity term increases the preference margin (by a positive shift of $\lambda\,\Delta\mathrm{sim}$ in the log-odds), yielding a clearer separation between preferred and dispreferred responses than DPO.
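As a concrete illustration, consider a minimal sketch of how such a similarity term could enter a DPO-style objective (the notation $\iota(x)$ for the inferred intent and $\mathrm{sim}(\cdot,\cdot)$ for the similarity function are our own; the paper's exact formulation may differ):

$$\mathcal{L}_{\text{A-IPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} + \lambda\,\Delta\mathrm{sim}\right)\right], \qquad \Delta\mathrm{sim} = \mathrm{sim}(\iota(x), y_w) - \mathrm{sim}(\iota(x), y_l).$$

Under this sketch, the logit of preferring $y_w$ over $y_l$ is shifted by exactly $\lambda\,\Delta\mathrm{sim}$ relative to DPO, which is the margin increase stated above.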
For evaluation, we introduce two new benchmarks, Real-pref and Attack-pref, along with an extended version of an existing dataset, GlobalOpinionQA-Ext, to assess real-world and adversarial preference alignment.
By explicitly modeling diverse user intents, A-IPO facilitates pluralistic preference optimization while enhancing adversarial robustness in preference alignment. Comprehensive empirical evaluation demonstrates that A-IPO consistently surpasses existing baselines, yielding substantial improvements across key metrics: up to +24.8 win rate and +45.6 Response-Intention Consistency (RIC) on Real-pref; up to +38.6 Response Similarity (RS) and +52.2 Defense Success Rate (DSR) on Attack-pref; and up to +54.6 Intention Consistency Score (ICS) on GlobalOpinionQA-Ext.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 10196