Benchmarking Robustness to Natural Distribution Shifts for Facial Analysis

09 Oct 2021, 14:49 (modified: 10 Dec 2021, 12:41) · NeurIPS 2021 Workshop DistShift Poster
Keywords: distribution shifts, facial analysis, calibration, label noise, subpopulation shift, spurious correlations
TL;DR: We introduce several natural distribution shifts using facial data and investigate their impact on accuracy and calibration.
Abstract: During the deployment of machine learning models, performance can degrade relative to the training and validation data. This generalization gap can arise for a variety of reasons and is particularly critical in applications where certain groups of people are disadvantaged by the outcome, e.g., facial analysis. The literature provides a wide range of methods to either perform robust classification under distribution shifts or at least to express the uncertainty caused by them. However, there is still a need for datasets that exhibit different natural distribution shifts affecting specific subgroups in order to test these methods. We use a balanced dataset for facial analysis and introduce subpopulation shifts, spurious correlations, and subpopulation-specific label noise. On this basis, we investigate to what extent known approaches for calibrating neural networks remain reliable under these shifts. Each of the modifications leads to performance degradation, but the combination of ensembles and temperature scaling is particularly useful for stabilizing calibration across the shifts.
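The abstract's conclusion rests on temperature scaling, a standard post-hoc calibration method. As a minimal sketch (not the authors' implementation; the grid range and tolerance are illustrative assumptions), a single scalar temperature T is fitted on held-out validation logits by minimizing the negative log-likelihood:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels under temperature T.
    probs = softmax(logits / T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Simple grid search for the T that minimizes validation NLL;
    # the original work may use gradient-based optimization instead.
    return min(grid, key=lambda T: nll(logits, labels, T))
```

For an overconfident model (near-certain predictions but imperfect accuracy), the fitted temperature exceeds 1, softening the predicted probabilities toward the model's true accuracy; for an ensemble, the same fit would be applied to the averaged logits.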