Learning Concept Credible Models for Mitigating Shortcuts

Published: 31 Oct 2022, Last Modified: 14 Dec 2022
NeurIPS 2022 Accept
Keywords: shortcuts, spurious correlations, deep learning
TL;DR: We present two approaches for mitigating shortcuts that combine known concepts (representations based on domain knowledge) with unknown concepts, either by fitting residuals or by extending a recently proposed regularization penalty.
Abstract: During training, models can exploit spurious correlations as shortcuts, resulting in poor generalization performance when shortcuts do not persist. In this work, assuming access to a representation based on domain knowledge (i.e., known concepts) that is invariant to shortcuts, we aim to learn robust and accurate models from biased training data. In contrast to previous work, we do not rely solely on known concepts, but allow the model to also learn unknown concepts. We propose two approaches for mitigating shortcuts that incorporate domain knowledge while accounting for potentially important yet unknown concepts. The first approach is two-stage: after fitting a model using known concepts, it accounts for the residual using unknown concepts. While flexible, we show that this approach is vulnerable when shortcuts are correlated with the unknown concepts. This limitation is addressed by our second approach, which extends a recently proposed regularization penalty. On two real-world datasets, we demonstrate that both approaches can successfully mitigate shortcut learning.
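To make the two-stage idea concrete, here is a minimal sketch of residual fitting with synthetic data. It is an illustration under stated assumptions, not the paper's method: the feature matrices, linear models, and the regression setting are all hypothetical stand-ins for the known/unknown concept representations the abstract describes.

```python
# Minimal sketch of the two-stage residual-fitting approach described above.
# All data and model choices here are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
X_known = rng.normal(size=(n, 5))    # hypothetical shortcut-invariant (known) concept features
X_unknown = rng.normal(size=(n, 8))  # hypothetical unknown concept features
y = X_known @ rng.normal(size=5) + 0.3 * X_unknown[:, 0] + rng.normal(scale=0.1, size=n)

# Stage 1: fit a model using only the known concepts.
stage1 = LinearRegression().fit(X_known, y)

# Stage 2: fit a second model on the unknown concepts to explain the residual
# left over by the known-concept model.
residual = y - stage1.predict(X_known)
stage2 = LinearRegression().fit(X_unknown, residual)

def predict(x_known, x_unknown):
    """Combined prediction: known-concept model plus residual correction."""
    return stage1.predict(x_known) + stage2.predict(x_unknown)
```

As the abstract notes, this scheme is flexible but can fail when shortcuts correlate with the unknown concepts, since stage 2 has no mechanism to avoid absorbing them; that failure mode motivates the paper's second, regularization-based approach.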
Supplementary Material: pdf