FairICP: Encouraging Equalized Odds via Inverse Conditional Permutation

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: A fairness-aware learning approach for complex sensitive attributes to achieve equalized odds: employing adversarial learning with novel inverse conditional permutations.
Abstract: *Equalized odds*, an important notion of algorithmic fairness, aims to ensure that sensitive variables, such as race and gender, do not unfairly influence the algorithm's prediction when conditioning on the true outcome. Despite rapid advancements, current research primarily focuses on equalized odds violations caused by a single sensitive attribute, leaving the challenge of simultaneously accounting for multiple attributes under-addressed. We bridge this gap by introducing an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme. FairICP offers a flexible and efficient way to promote equalized odds under fairness conditions described by complex and multi-dimensional sensitive attributes. The efficacy and adaptability of our method are demonstrated through both simulation studies and empirical analyses of real-world datasets.
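The sketch below is only an illustration of the general idea the abstract describes: pairing an adversary with a conditionally permuted copy of the sensitive attributes so that the prediction is pushed toward independence of the sensitive attributes given the true outcome (equalized odds). It is not the paper's exact FairICP/inverse-conditional-permutation algorithm; the module names, the simple within-group permutation, and the `fairness_weight` trade-off parameter are all hypothetical choices made for the example, which assumes a discrete label.

```python
# Hypothetical sketch of adversarial equalized-odds training with a
# conditionally permuted sensitive attribute (NOT the paper's ICP scheme).
import torch
import torch.nn as nn

def conditional_permutation(a, y):
    """Permute rows of the sensitive attributes `a` within each label group,
    so the permuted copy preserves the conditional distribution of A given Y."""
    a_perm = a.clone()
    for label in y.unique():
        idx = (y == label).nonzero(as_tuple=True)[0]
        a_perm[idx] = a[idx[torch.randperm(len(idx))]]
    return a_perm

# Toy dimensions: 10 features, 2-dimensional sensitive attribute, binary label.
predictor = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1 + 2 + 1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
fairness_weight = 1.0  # hypothetical accuracy/fairness trade-off parameter

def train_step(x, a, y):
    y_col = y.unsqueeze(1).float()

    # 1) Adversary: distinguish real (y_hat, a, y) triples from triples where
    #    the sensitive attribute was conditionally permuted within label groups.
    y_hat = predictor(x)
    real = torch.cat([y_hat.detach(), a, y_col], dim=1)
    fake = torch.cat([y_hat.detach(), conditional_permutation(a, y), y_col], dim=1)
    adv_loss = bce(adversary(real), torch.ones(len(x), 1)) + \
               bce(adversary(fake), torch.zeros(len(x), 1))
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Predictor: minimize task loss while fooling the adversary, which
    #    discourages dependence of y_hat on A once Y is accounted for.
    y_hat = predictor(x)
    real = torch.cat([y_hat, a, y_col], dim=1)
    pred_loss = bce(y_hat, y_col) - \
                fairness_weight * bce(adversary(real), torch.ones(len(x), 1))
    opt_pred.zero_grad(); pred_loss.backward(); opt_pred.step()
```

For the authors' actual method and its inverse conditional permutation scheme, see the linked repository below.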
Lay Summary: Machine learning tools are widely used to support decision-making in areas like healthcare, employment, and public services. However, these systems can sometimes work better for certain groups than others—such as people of a particular age or sex. Addressing this issue requires tools that promote balanced performance across multiple factors simultaneously. Our study introduces a method called FairICP, which helps machine learning models make predictions that are both accurate and more balanced across different groups of people. FairICP adjusts how models learn from data, preventing them from forming misleading and spurious associations between sensitive personal attributes and outcomes—unless such associations are supported across population groups in the data. We tested FairICP on both simulated and real-world datasets, including health and social information, and found that it reduced disparities without significantly lowering accuracy. By supporting fairness across multiple factors at once, FairICP brings us closer to building responsible AI tools that are more inclusive and trustworthy for everyone.
Link To Code: https://github.com/yuhenglai/FairICP
Primary Area: Social Aspects->Fairness
Keywords: Algorithmic fairness, Equalized odds, Adversarial learning, Inverse conditional permutation
Submission Number: 10808