FairPO: Fair Preference Optimization for Multi-Label Learning

Published: 22 Sept 2025, Last Modified: 01 Dec 2025 · NeurIPS 2025 Workshop · CC BY 4.0
Keywords: Preference Optimisation, DPO, MLC
TL;DR: Our framework, FairPO, uses preference-based losses to fix critical ranking errors for rare labels and robust optimization to manage the fairness-performance trade-off in multi-label classification.
Abstract: Multi-label classification (MLC) often suffers from performance disparities across labels. We propose FairPO, a framework combining a preference-based loss with group-robust optimization to improve fairness by targeting underperforming labels. FairPO partitions labels into a privileged set for targeted improvement and a non-privileged set to maintain baseline performance. For privileged labels, a DPO-inspired preference loss addresses hard examples by correcting ranking errors between true labels and their confusing counterparts. A constrained objective maintains performance for non-privileged labels, while a Group Robust Preference Optimization (GRPO) formulation adaptively balances the two objectives to mitigate bias. We also demonstrate FairPO's versatility with reference-free variants based on Contrastive Preference Optimization (CPO) and Simple Preference Optimization (SimPO). Our code is available at https://anonymous.4open.science/r/FairPO.
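To illustrate how the three components described in the abstract could fit together, the following is a minimal PyTorch-style sketch. It is not the authors' released code: the function `fairpo_loss`, the hyperparameters `beta`, `epsilon`, and `eta`, and the use of a binary-cross-entropy constraint for the non-privileged group are assumptions made for exposition only.

```python
import torch
import torch.nn.functional as F


def fairpo_loss(logits, ref_logits, targets, privileged_mask,
                group_weights, beta=0.1, epsilon=0.05, eta=0.1):
    """Hypothetical FairPO-style objective (illustrative sketch, not the paper's code).

    logits, ref_logits : (B, L) scores from the current and a frozen reference model
    targets            : (B, L) binary multi-label targets (float)
    privileged_mask    : (L,) boolean, True for privileged (underperforming) labels
    group_weights      : (2,) adaptive weights for [privileged, non-privileged] groups
    """
    # --- Privileged group: DPO-inspired pairwise preference loss ---------
    # Contrast every true privileged label (preferred) with every negative
    # privileged label (a potentially confusing counterpart) per example.
    priv_logits = logits[:, privileged_mask]            # (B, P)
    priv_ref = ref_logits[:, privileged_mask]            # (B, P)
    priv_targets = targets[:, privileged_mask]           # (B, P)

    margin = beta * (priv_logits - priv_ref)             # implicit reward, (B, P)
    diff = margin.unsqueeze(2) - margin.unsqueeze(1)      # reward(i) - reward(j), (B, P, P)
    pair_mask = priv_targets.unsqueeze(2) * (1 - priv_targets).unsqueeze(1)
    pref_loss = -(F.logsigmoid(diff) * pair_mask).sum() / pair_mask.sum().clamp(min=1)

    # --- Non-privileged group: constrained objective ----------------------
    # Penalize only when the model's BCE exceeds the reference BCE by more
    # than a slack epsilon, so baseline performance is maintained.
    nonpriv = ~privileged_mask
    bce = F.binary_cross_entropy_with_logits(logits[:, nonpriv], targets[:, nonpriv])
    ref_bce = F.binary_cross_entropy_with_logits(ref_logits[:, nonpriv], targets[:, nonpriv])
    constraint_loss = F.relu(bce - ref_bce.detach() - epsilon)

    # --- GRPO-style adaptive group weighting ------------------------------
    group_losses = torch.stack([pref_loss, constraint_loss])
    with torch.no_grad():  # multiplicative-weights update toward the worse-off group
        group_weights *= torch.exp(eta * group_losses)
        group_weights /= group_weights.sum()
    return (group_weights * group_losses).sum(), group_weights
```

In this sketch, the exponentiated-gradient update on `group_weights` shifts optimization pressure toward whichever group currently incurs the larger loss, which is one standard way to realize a group-robust trade-off between the privileged preference objective and the non-privileged constraint.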
Submission Number: 102