Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition

Published: 21 Sept 2023, Last Modified: 19 Dec 2023, NeurIPS 2023 oral
Keywords: Bias Mitigation, Fairness, Facial Recognition
TL;DR: We find that bias is inherent to neural network architectures and hyperparameters, yet we can mitigate it by searching for fair ones
Abstract: Face recognition systems are widely deployed in safety-critical applications, including law enforcement, yet they exhibit bias across a range of socio-demographic dimensions, such as gender and race. Conventional wisdom dictates that model biases arise from biased training data. As a consequence, previous works on bias mitigation largely focused on pre-processing the training data, adding penalties to prevent bias from affecting the model during training, or post-processing predictions to debias them, yet these approaches have shown limited success on hard problems such as face recognition. In our work, we discover that biases are actually inherent to neural network architectures themselves. Following this reframing, we conduct the first neural architecture search for fairness, jointly with a search for hyperparameters. Our search outputs a suite of models which Pareto-dominate all other high-performance architectures and existing bias mitigation methods in terms of accuracy and fairness, often by large margins, on the two most widely used datasets for face identification, CelebA and VGGFace2. Furthermore, these models generalize to other datasets and sensitive attributes. We release our code, models, and raw data files at
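
To make the joint search concrete, below is a minimal Python sketch of the kind of multi-objective architecture/hyperparameter search the abstract describes: sample candidate configurations, score each on both an accuracy error and a demographic disparity measure, and keep the Pareto front. Everything here is an illustrative assumption, not the paper's method: the search space, the evaluate() stub, and the plain random-search strategy are placeholders; the actual search space, fairness objectives, and search algorithm are specified in the full paper.

import math
import random
from dataclasses import dataclass

# Hypothetical search space: the architecture choices and hyperparameter
# ranges below are illustrative placeholders, not the paper's actual space.
SEARCH_SPACE = {
    "backbone": ["resnet50", "dpn", "rexnet"],
    "head": ["arcface", "cosface", "magface"],
    "optimizer": ["sgd", "adamw"],
    "lr": (1e-4, 1e-1),
}

@dataclass
class Candidate:
    config: dict
    error: float      # 1 - recognition accuracy (lower is better)
    disparity: float  # fairness objective, e.g. accuracy gap across groups

def sample_config(rng):
    # Draw one architecture/hyperparameter configuration; the learning
    # rate is sampled log-uniformly over its range.
    lo, hi = SEARCH_SPACE["lr"]
    return {
        "backbone": rng.choice(SEARCH_SPACE["backbone"]),
        "head": rng.choice(SEARCH_SPACE["head"]),
        "optimizer": rng.choice(SEARCH_SPACE["optimizer"]),
        "lr": 10 ** rng.uniform(math.log10(lo), math.log10(hi)),
    }

def evaluate(config, rng):
    # Stand-in for training a face-recognition model with `config` and
    # measuring (error, disparity across demographic groups). Random
    # numbers merely keep this sketch runnable; no real model is trained.
    return rng.random(), rng.random()

def pareto_front(candidates):
    # A candidate is dominated if another is no worse on both objectives
    # and strictly better on at least one; keep the non-dominated set.
    return [
        c for c in candidates
        if not any(
            o.error <= c.error and o.disparity <= c.disparity
            and (o.error < c.error or o.disparity < c.disparity)
            for o in candidates
        )
    ]

if __name__ == "__main__":
    rng = random.Random(0)
    pool = []
    for _ in range(50):  # toy budget; a real NAS+HPO run uses far more trials
        cfg = sample_config(rng)
        err, disp = evaluate(cfg, rng)
        pool.append(Candidate(cfg, err, disp))
    for c in pareto_front(pool):
        print(f"error={c.error:.3f} disparity={c.disparity:.3f} {c.config}")

One appeal of returning the whole Pareto front rather than a single winner is that practitioners can pick their own accuracy/fairness trade-off at deployment time.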
Submission Number: 81