Auditing Predictive Models for Intersectional Biases

TMLR Paper6342 Authors

30 Oct 2025 (modified: 05 Nov 2025) · Under review for TMLR · CC BY 4.0
Abstract: Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, can produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we propose Conditional Bias Scan (CBS), an auditing framework for detecting intersectional biases in the outputs of classification models that may lead to disparate impact. CBS identifies the subgroup with the most significant bias against the protected class, compared to the equivalent subgroup in the non-protected class. The framework can audit for predictive biases using common group fairness definitions (separation and sufficiency) for both probabilistic and binarized predictions. We show through empirical evaluations that this methodology has significantly higher bias detection power than similar methods that audit for subgroup fairness. We then use this approach to detect statistically significant intersectional biases in the predictions of the COMPAS pre-trial risk assessment tool and a model trained on the German Credit dataset.
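For intuition only, the sketch below illustrates a separation-style subgroup check on a generic dataframe with hypothetical columns `protected`, `score`, and `label`. It is not the CBS scan statistic defined in the paper; it only shows the kind of conditional comparison such an audit performs, namely comparing model scores between protected and non-protected members of a candidate subgroup while conditioning on the true outcome.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def separation_gap(df: pd.DataFrame, subgroup_mask,
                   protected_col="protected", score_col="score",
                   label_col="label"):
    """Separation-style check for one subgroup: within each true-label
    stratum, compare model scores of protected vs. non-protected members.
    Column names are hypothetical placeholders, not the CBS interface."""
    results = {}
    sub = df[subgroup_mask]
    for y in (0, 1):
        stratum = sub[sub[label_col] == y]
        prot = stratum.loc[stratum[protected_col] == 1, score_col]
        nonprot = stratum.loc[stratum[protected_col] == 0, score_col]
        if len(prot) > 0 and len(nonprot) > 0:
            # Rank-based test of whether score distributions differ
            # given the same true outcome (a separation violation).
            _, pval = mannwhitneyu(prot, nonprot, alternative="two-sided")
            results[y] = {"mean_gap": prot.mean() - nonprot.mean(),
                          "p_value": pval}
    return results
```

A naive audit could loop this check over candidate subgroups (e.g., combinations of feature values) and report the subgroup with the strongest evidence of bias; CBS instead formalizes this search with a scan statistic and handles multiple testing, as described in the paper.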
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Sivan_Sabato1
Submission Number: 6342