Fair Out-of-Distribution Detection

10 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Out-of-distribution, Trustworthy Machine Learning
TL;DR: This work uncovers fairness confusion in out-of-distribution detection caused by sensitive attributes, and proposes a novel metric and solution; extensive experiments validate their effectiveness.
Abstract: Out-of-Distribution (OOD) detection prevents models from misclassifying OOD data that fall outside the in-distribution (ID) classes as ID categories. However, existing OOD detection methods ignore a critical dimension, namely fairness. This oversight can yield unreliable predictions driven by sensitive attributes in the data. To fill this gap, we introduce a novel and challenging problem termed \textit{Fair OOD Detection}, which jointly considers OOD detection and the bias induced by Fairness Confusion (FC), arising from sensitive attributes and the Feature Shifts (FS) they induce. Furthermore, we propose a novel metric termed Fair-OOD to identify the FC phenomenon in OOD detection, and a theoretically guaranteed semi-supervised solution named Predictive Adaptive Calibration (PACT) that simultaneously enhances OOD detection capability, ensures fairness, and mitigates FC, without requiring sensitive-attribute labels for OOD data. Extensive experiments demonstrate that: (a) Fair-OOD identifies FC issues in models that existing fairness metrics fail to detect; (b) PACT effectively improves OOD detection performance while eliminating both FC and unfairness.
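The abstract does not define the Fair-OOD metric itself, so as a generic illustration of the kind of disparity it describes, the sketch below computes a group-wise gap in FPR@95 (a standard OOD detection metric) across sensitive-attribute groups. All function names and the synthetic data here are assumptions for illustration only, not the paper's method.

```python
import numpy as np

def fpr_at_95_tpr(scores_id, scores_ood):
    """FPR on OOD samples at the threshold giving 95% TPR on ID samples.

    Assumes higher scores indicate in-distribution data.
    """
    threshold = np.percentile(scores_id, 5)          # 95% of ID scores lie above this
    return float(np.mean(scores_ood >= threshold))   # OOD wrongly retained as ID

def groupwise_fpr_gap(scores_id, scores_ood, groups_ood):
    """Disparity in OOD false-positive rates across sensitive-attribute groups."""
    threshold = np.percentile(scores_id, 5)
    fprs = {g: float(np.mean(scores_ood[groups_ood == g] >= threshold))
            for g in np.unique(groups_ood)}
    return max(fprs.values()) - min(fprs.values()), fprs

# Toy usage: group 1's OOD scores overlap more with the ID scores, so that
# group is disproportionately misclassified as in-distribution.
rng = np.random.default_rng(0)
scores_id = rng.normal(2.0, 1.0, 5000)
scores_ood = np.concatenate([rng.normal(-1.0, 1.0, 2500),   # group 0
                             rng.normal(0.5, 1.0, 2500)])   # group 1
groups_ood = np.repeat([0, 1], 2500)
gap, per_group = groupwise_fpr_gap(scores_id, scores_ood, groups_ood)
print(f"FPR@95 per group: {per_group}, gap: {gap:.3f}")
```

In this toy setting a standard aggregate FPR@95 would average over both groups and hide the disparity, which is the motivation the abstract gives for a dedicated fairness-aware metric.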
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 3561