Keywords: Algorithmic fairness, Subgroup, Adversarial learning, Data sparsity
Abstract: Algorithmic fairness is a socially crucial topic in real-world applications of AI.
Among many notions of fairness, subgroup fairness is widely studied when multiple sensitive attributes (e.g., gender, race, and age) are present.
However, as the number of sensitive attributes grows, the number of subgroups increases accordingly, creating a heavy computational burden and a data sparsity problem (i.e., subgroups with very small sample sizes).
In this paper, we develop a novel learning algorithm for subgroup fairness that resolves these issues by focusing on sufficiently large subgroups as well as on marginal fairness (fairness with respect to each sensitive attribute individually).
To this end, we formalize a notion of subgroup-subset fairness and introduce a corresponding distributional fairness measure called the supremum Integral Probability Metric (supIPM).
Building on this formulation, we propose the Doubly Regressing Adversarial learning for subgroup Fairness (DRAF) algorithm, which reduces a surrogate fairness gap for supIPM with much less computation than directly reducing supIPM.
Theoretically, we prove that the proposed surrogate fairness gap is an upper bound of supIPM.
Empirically, we show that the DRAF algorithm outperforms baseline methods on benchmark datasets, particularly when the number of sensitive attributes is large and many subgroups are therefore very small.
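To make the supIPM idea concrete, the following is a minimal, purely illustrative sketch (not the paper's actual measure or algorithm): it approximates the IPM for each subgroup with a crude mean-difference surrogate of model scores, skips subgroups below a minimum size (the data-sparsity restriction to sufficiently large subgroups), and takes the supremum over the remaining subgroups. All names (`sup_ipm`, `min_size`) are hypothetical.

```python
import numpy as np

def sup_ipm(scores, subgroup_ids, min_size=30):
    """Illustrative supremum-IPM-style fairness gap (hypothetical sketch):
    for each sufficiently large subgroup, measure a mean-difference
    surrogate of the IPM between the subgroup's score distribution and
    the overall one, then take the supremum over those subgroups."""
    overall_mean = scores.mean()
    gaps = []
    for g in np.unique(subgroup_ids):
        mask = subgroup_ids == g
        if mask.sum() < min_size:  # skip tiny subgroups (data sparsity)
            continue
        gaps.append(abs(scores[mask].mean() - overall_mean))
    return max(gaps) if gaps else 0.0

# Toy usage: two subgroups of four samples with shifted score means.
scores = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(sup_ipm(scores, groups, min_size=2))  # → 0.5
```

A richer witness-function class (e.g., a neural discriminator, as in adversarial learning) would replace the mean-difference surrogate in practice; the sketch only shows the "supremum over sufficiently large subgroups" structure.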
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 18082