Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, can produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we propose Conditional Bias Scan (CBS), an auditing framework for detecting intersectional biases in classification models. CBS identifies the subgroup with the most significant bias against the protected class, compared to the equivalent subgroup in the non-protected class, and can incorporate multiple commonly used fairness definitions for both probabilistic and binarized predictions. We show that this methodology can detect subgroup biases in the COMPAS pre-trial risk assessment tool and in the German Credit dataset, and that it has higher bias detection power than similar methods that audit for subgroup fairness.
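To make the subgroup-audit setting concrete, the sketch below brute-forces a comparison of one fairness metric (false positive rate, as an example) between the protected and non-protected class within every subgroup defined by a single feature value, and reports the subgroup with the largest gap. This is only an illustration of the general idea, not the CBS scan statistic; the column names (`protected`, `pred`, `label`, `age_group`) and the choice of metric are assumptions made for the example.

```python
# Hypothetical brute-force subgroup audit -- NOT the CBS methodology.
# Compares false positive rates between protected and non-protected
# individuals within each single-feature subgroup.

import pandas as pd


def fpr(df: pd.DataFrame) -> float:
    """False positive rate: P(pred = 1 | label = 0)."""
    negatives = df[df["label"] == 0]
    return float("nan") if negatives.empty else (negatives["pred"] == 1).mean()


def audit_subgroups(df: pd.DataFrame, protected: str, audit_features: list[str]):
    """Return (feature, value, fpr_gap) for the subgroup with the largest
    FPR gap between the protected and non-protected class."""
    worst = None
    for feat in audit_features:
        for value in df[feat].unique():
            sub = df[df[feat] == value]
            gap = fpr(sub[sub[protected] == 1]) - fpr(sub[sub[protected] == 0])
            if pd.notna(gap) and (worst is None or gap > worst[2]):
                worst = (feat, value, gap)
    return worst


# Toy usage with assumed columns: `protected` flags class membership,
# `pred` is the binarized prediction, `label` is the true outcome.
data = pd.DataFrame({
    "protected": [1, 1, 1, 1, 0, 0, 0, 0],
    "age_group": ["<25", "<25", "25+", "25+", "<25", "<25", "25+", "25+"],
    "pred":      [1, 1, 0, 0, 0, 1, 0, 0],
    "label":     [0, 0, 0, 1, 0, 0, 0, 1],
})
print(audit_subgroups(data, "protected", ["age_group"]))
```

Note that exhaustively enumerating subgroups defined over multiple features grows combinatorially with the number of features and values, which is why dedicated scan-statistic approaches are used for this problem in practice.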