Improve Identity-Robustness for Face Models

ICML 2023 Workshop SCIS Submission 9

Published: 20 Jun 2023, Last Modified: 28 Jul 2023. SCIS 2023 Poster.
Keywords: Face Models, Robustness, Fairness, Distribution Shifts.
TL;DR: We propose a conditional inverse density method to improve the fairness of face models under unawareness.
Abstract: Despite the success of deep-learning models in many tasks, there have been concerns that such models learn shortcuts and lack robustness to irrelevant confounders. For models trained directly on human faces, a sensitive confounder is human identity. Due to the privacy concerns and cost of identity annotations, improving identity-related robustness without requiring such annotations is of great importance. Here, we explore using off-the-shelf face-recognition embedding vectors as proxies for identities to enforce such robustness. Given an identity-independent classification task and a face dataset, we propose to use the structure of the face-recognition embedding space to implicitly emphasize rare samples within each class. We do so by weighting samples according to their conditional inverse density (CID) in the proxy embedding space. Our experiments suggest that this simple sample-weighting scheme not only improves training robustness but often improves overall performance as a result. We also show that employing such constraints during training yields models that are significantly less sensitive to different levels of bias in the dataset.
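The abstract describes weighting each sample by its conditional (per-class) inverse density in a face-recognition embedding space, so that rare samples within a class are emphasized. The paper does not specify the density estimator; the sketch below is a hedged illustration using a k-nearest-neighbour density proxy (distance to the k-th same-class neighbour, which grows as density shrinks), with weights normalized within each class. The function name `cid_weights` and the k-NN choice are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def cid_weights(embeddings, labels, k=5):
    """Conditional inverse-density (CID) style sample weights (sketch).

    Within each class, the distance to the k-th nearest same-class
    neighbour serves as a proxy for inverse density: samples in sparse
    regions of the embedding space receive larger weights. Weights are
    normalized to sum to the class size, so the average weight is 1.
    """
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    weights = np.empty(len(labels), dtype=float)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        X = embeddings[idx]
        # Pairwise Euclidean distances within the class.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)  # exclude self-distances
        kk = min(k, len(idx) - 1)    # guard small classes
        # Distance to the k-th nearest same-class neighbour ~ 1/density.
        knn_dist = np.sort(d, axis=1)[:, kk - 1]
        # Normalize so the weights within the class sum to the class size.
        weights[idx] = knn_dist / knn_dist.sum() * len(idx)
    return weights
```

These weights could then be passed to a standard weighted loss (e.g. per-sample weights in a cross-entropy objective) to emphasize rare identities within each class.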
Submission Number: 9