Keywords: algorithmic fairness, class imbalance, bias mitigation, AutoML
TL;DR: This paper introduces a new bias mitigator along with a methodology for applying automated machine learning to fairness.
Abstract: Both group bias and class imbalance occur when instances with certain characteristics are under-represented in the data. Group bias causes estimators to be unfair, and class imbalance causes estimators to be inaccurate. Oversampling ought to address both kinds of under-representation. Unfortunately, it is hard to pick a level of oversampling that yields the best fairness and accuracy for a given estimator. This paper introduces Orbis, an oversampling algorithm that can be precisely tuned for both fairness and accuracy. Orbis is a pre-estimator bias mitigator that modifies the data used to train downstream estimators. This paper demonstrates how to use automated machine learning to tune Orbis together with the choice of estimator that follows it, and it empirically compares several approaches for blending multiple metrics into a single optimizer objective. Overall, this paper introduces a new bias mitigator along with a methodology for training and tuning it.
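To make the workflow the abstract describes concrete, here is a minimal, self-contained sketch (not the paper's code or the actual Orbis implementation). It searches over an oversampling level and scores each candidate with a blended fairness/accuracy objective. The `oversample_minority` helper, the toy data, the weighted-sum blend, and the grid search are illustrative assumptions standing in for Orbis, real datasets, the paper's blending schemes, and a full AutoML optimizer.

```python
# Hedged sketch: tune an oversampling level for a blended fairness/accuracy
# objective. All names and choices here are illustrative, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: X features, y binary labels, g a binary protected-group attribute.
n = 2000
g = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + g[:, None] * 0.5
y = (X[:, 0] + 0.5 * g + rng.normal(scale=1.0, size=n) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, g, test_size=0.3, random_state=0)

def oversample_minority(X, y, g, rate):
    """Duplicate a `rate` fraction of each under-represented (group, label)
    cell's deficit relative to the largest cell (hypothetical stand-in for
    Orbis, which is tunable in a similar spirit)."""
    counts = {}
    for key in zip(g, y):
        counts[key] = counts.get(key, 0) + 1
    max_count = max(counts.values())
    extra = []
    for key, c in counts.items():
        deficit = int(rate * (max_count - c))
        if deficit > 0:
            idx = np.flatnonzero((g == key[0]) & (y == key[1]))
            extra.append(rng.choice(idx, size=deficit, replace=True))
    if not extra:
        return X, y
    idx = np.concatenate([np.arange(len(y))] + extra)
    return X[idx], y[idx]

def disparate_impact(y_pred, g):
    """Ratio of favorable-outcome rates between groups (1.0 means parity)."""
    r0 = y_pred[g == 0].mean()
    r1 = y_pred[g == 1].mean()
    return 0.0 if max(r0, r1) == 0 else min(r0, r1) / max(r0, r1)

def blended_objective(acc, di, weight=0.5):
    """Scalar blend of accuracy and fairness; a weighted sum is assumed here,
    one of several blending schemes a search could use."""
    return weight * acc + (1 - weight) * di

best = None
for rate in np.linspace(0.0, 1.0, 11):   # the tunable oversampling level
    Xo, yo = oversample_minority(X_tr, y_tr, g_tr, rate)
    est = LogisticRegression(max_iter=1000).fit(Xo, yo)
    pred = est.predict(X_te)
    score = blended_objective(accuracy_score(y_te, pred),
                              disparate_impact(pred, g_te))
    if best is None or score > best[0]:
        best = (score, rate)

print(f"best blended score {best[0]:.3f} at oversampling rate {best[1]:.2f}")
```

In the methodology the paper describes, the grid loop would be replaced by an AutoML optimizer that tunes the mitigator's hyperparameters jointly with the choice of downstream estimator, and the weighted sum is only one of the candidate ways to fold multiple metrics into a single objective.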
Submission Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Yes
CPU Hours: 0
GPU Hours: 0
TPU Hours: 0
Evaluation Metrics: No
Code And Dataset Supplement: zip