On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition

04 Oct 2022 (modified: 26 Nov 2022) · NeurIPS 2022 Workshop MetaLearn Poster
Keywords: Face Recognition, Fairness, Neural Architecture Search, Hyperparameters
TL;DR: We analyze the impact of architectures and hyperparameters on fairness in face recognition and use NAS to design simultaneously fairer and more accurate models.
Abstract: Face recognition systems are widely used but are known to exhibit bias across a range of sociodemographic dimensions, such as gender and race. An array of works proposing pre-processing, training, and post-processing methods has failed to close these gaps. Here, we take a very different approach to this problem, identifying that both architectures and hyperparameters of neural networks are instrumental in reducing bias. We first run a large-scale analysis of the impact of architectures and training hyperparameters on several common fairness metrics and show that the implicit convention of choosing high-accuracy architectures may be suboptimal for fairness. Motivated by our findings, we run the first neural architecture search for fairness, jointly with a search for hyperparameters. We output a suite of models which Pareto-dominate all other competitive architectures in terms of accuracy and fairness. Furthermore, we show that these models transfer well to other face recognition datasets with similar and distinct protected attributes. We release our code and raw result files so that researchers and practitioners can replace our fairness metrics with a bias measure of their choice.
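The Pareto-dominance claim in the abstract can be illustrated with a minimal sketch: a model dominates another when it is no worse on both objectives (here, error and a bias gap, both minimized) and strictly better on at least one. The model names, objective values, and helper function below are illustrative assumptions, not taken from the paper or its released code.

```python
# Hypothetical sketch of Pareto-front selection over two minimized
# objectives: error (1 - accuracy) and a fairness/bias gap.
# All model names and numbers are made up for illustration.

def pareto_front(models):
    """Return the names of models not dominated by any other model.

    Each entry is (name, error, bias); lower is better for both.
    A model is dominated if some other model is <= on both objectives
    and strictly < on at least one.
    """
    front = []
    for name, err, bias in models:
        dominated = any(
            (e2 <= err and b2 <= bias) and (e2 < err or b2 < bias)
            for n2, e2, b2 in models
            if n2 != name
        )
        if not dominated:
            front.append(name)
    return front

candidates = [
    ("A", 0.05, 0.10),  # lowest error, but larger bias gap
    ("B", 0.07, 0.04),  # higher error, smallest bias gap
    ("C", 0.08, 0.11),  # worse than A on both objectives -> dominated
]
print(pareto_front(candidates))  # → ['A', 'B']
```

A model suite that "Pareto-dominates all other competitive architectures", as the abstract claims, would leave every baseline outside the front computed this way.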