Achieving Fairness-Utility Trade-offs through Decoupling Direct and Indirect Bias

ICLR 2026 Conference Submission 14593 Authors

18 Sept 2025 (modified: 08 Oct 2025) · License: CC BY 4.0
Keywords: fairness, regression, bias, subspace, efficiency
TL;DR: We propose a novel approach to achieve fairness-utility trade-offs in regression models by decomposing the predictor space.
Abstract: Fairness in regression tasks is critical in high-stakes domains such as healthcare, finance, and criminal justice, where biased predictions can lead to unequal treatment. Bias can arise both directly, when sensitive attributes explicitly influence predictions, and indirectly, when predictors correlated with sensitive attributes act as proxies. Existing fairness-aware regression methods often fail to address both forms of bias simultaneously, or they sacrifice predictive performance. We propose Fair Envelope Regression Models (FERM), a novel framework that brings structure-aware subspace decomposition techniques from envelope regression into fairness-aware learning. FERM decomposes the predictor space into four orthogonal components: variation uniquely informative about the response, variation associated with sensitive attributes, shared variation, and residual noise. By penalizing only the sensitive component, FERM provides explicit and interpretable control over the fairness-utility trade-off. Unlike black-box approaches, FERM offers interpretable estimators with statistical efficiency guarantees under a fully parametric linear model. We validate FERM through extensive simulations and real-world experiments, showing improved fairness and predictive accuracy compared to prior work. Our results highlight envelope-based decomposition as a principled and powerful tool for building fair, efficient, and interpretable regression models.
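The core mechanism the abstract describes, splitting the predictor space into a sensitive subspace and its complement and then penalizing only the sensitive part, can be illustrated with a short sketch. The code below is a hypothetical, simplified illustration of that idea and not the authors' FERM estimator: the function name `fair_subspace_regression`, the penalty weight `lam`, and the choice of a rank-one sensitive span (a single sensitive attribute `s`) are all assumptions made for the example.

```python
import numpy as np

def fair_subspace_regression(X, y, s, lam=10.0):
    """Illustrative sketch (hypothetical API, not the authors' code):
    split the predictor space into directions of X predictable from the
    sensitive attribute s and their orthogonal complement, then shrink
    only the coefficients acting through the sensitive part."""
    n, p = X.shape
    Xc = X - X.mean(0)                   # center predictors
    sc = (s - s.mean()).reshape(-1, 1)   # center sensitive attribute
    yc = y - y.mean()

    # Directions of X explained by s: regress X on s, take the fitted span.
    B = np.linalg.lstsq(sc, Xc, rcond=None)[0]         # 1 x p coefficients
    U, _, _ = np.linalg.svd(B.T, full_matrices=False)  # basis of sensitive span
    P_sens = U @ U.T                                   # projector onto it

    Z_sens = Xc @ P_sens   # sensitive component of each predictor
    Z_safe = Xc - Z_sens   # orthogonal remainder (response-relevant + noise)

    # Ridge-style fit that penalizes only the sensitive block; the tiny
    # penalty on the safe block is purely for numerical stability.
    Z = np.hstack([Z_safe, Z_sens])
    D = np.diag(np.r_[np.full(p, 1e-6), np.full(p, lam)])
    beta = np.linalg.solve(Z.T @ Z + D, Z.T @ yc)
    return beta[:p], beta[p:]   # safe and sensitive coefficient blocks
```

Sweeping `lam` upward shrinks the s-correlated part of the fit toward zero, tracing a fairness-utility path analogous to the explicit trade-off control the abstract attributes to FERM; the paper's actual estimator additionally separates shared variation and residual noise under a parametric envelope model.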
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 14593