Abstract: We propose and analyze an algorithmic framework for “bias bounties”: events in which external participants are invited to propose improvements to a trained model, akin to bug bounty events in software and security. Our framework allows participants to submit arbitrary subgroup improvements, which are then algorithmically incorporated into an updated model. Our algorithm has the property that there is no tension between overall and subgroup accuracies, nor between different subgroup accuracies, and it enjoys provable convergence to either the Bayes optimal model or a state in which no further improvements can be found by the participants. We provide formal analyses of our framework, experimental evaluation, and findings from a preliminary bias bounty event.
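To make the acceptance-and-update step concrete, here is a minimal Python sketch of one plausible realization, under the assumption that a submission is a pair (g, h) of a group-membership function and a candidate predictor, validated on held-out data; the class and method names (`BountyModel`, `submit`) are illustrative, not an API from the paper.

```python
import numpy as np

class BountyModel:
    """Sketch of a model that incorporates accepted subgroup improvements.

    Accepted submissions are pairs (g, h): g maps features to a 0/1 group
    indicator, h is a candidate predictor for that group.
    """

    def __init__(self, base_predict):
        self.base_predict = base_predict  # initial trained model: X -> labels
        self.updates = []                 # accepted (g, h) pairs, oldest first

    def predict(self, X):
        preds = np.array(self.base_predict(X))
        # Apply accepted updates in order; a later update overwrites
        # earlier ones on its group, so the newest takes precedence.
        for g, h in self.updates:
            mask = np.asarray(g(X), dtype=bool)
            if mask.any():
                preds[mask] = h(X[mask])
        return preds

    def submit(self, g, h, X_val, y_val):
        """Accept (g, h) iff h beats the current model's accuracy on
        the validation points belonging to group g."""
        mask = np.asarray(g(X_val), dtype=bool)
        if not mask.any():
            return False
        cur_acc = np.mean(self.predict(X_val)[mask] == y_val[mask])
        new_acc = np.mean(h(X_val[mask]) == y_val[mask])
        if new_acc > cur_acc:
            self.updates.append((g, h))
            return True
        return False
```

In this sketch, an accepted update changes predictions only on its group, and only when accuracy there strictly improves, so each acceptance also weakly improves overall accuracy; this illustrates (but does not reproduce the paper's analysis of) the "no tension" property stated above.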