Evaluating and Improving Robustness of Self-Supervised Representations to Spurious Correlations

28 May 2022, 15:03 (modified: 21 Jul 2022, 01:30) · SCIS 2022 Poster
Keywords: Representation Learning, Self-Supervised Learning, Spurious Correlations
TL;DR: We evaluate the robustness of self-supervised models to spurious correlations and improve downstream worst-group performance by adding late-layer, transformation-based view-generation modules.
Abstract: Recent empirical studies have found inductive biases in supervised learning toward simple features that may be spuriously correlated with the label, resulting in suboptimal performance on minority subgroups. Despite the growing popularity of methods which learn representations from unlabeled data, it is unclear how potential spurious features may be manifested in the learnt representations. In this work, we explore whether recent Self-Supervised Learning (SSL) methods produce representations which exhibit similar behaviors under spurious correlations. First, we show that classical approaches to combating spurious correlations, such as dataset re-sampling during SSL, do not consistently lead to invariant representations. Second, we find that spurious information is disproportionately concentrated in the later layers of the encoder. Motivated by these findings, we propose a method to remove spurious information from these representations during pre-training by pruning or re-initializing the later layers of the encoder. We find that our method produces representations which outperform the baseline on three datasets, without the need for group or label information during SSL.
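The late-layer re-initialization idea described above can be sketched as a simple weight transformation. The sketch below is illustrative only: the `reinit_late_layers` helper, the plain-NumPy MLP weights, and the He-style Gaussian scaling are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def reinit_late_layers(layers, num_late, rng):
    """Return a copy of `layers` (a list of weight matrices) in which the
    last `num_late` matrices are re-drawn from a fresh He-style Gaussian
    init, discarding whatever (possibly spurious) features they encoded."""
    out = [w.copy() for w in layers]
    for i in range(len(out) - num_late, len(out)):
        fan_in = out[i].shape[0]
        out[i] = rng.standard_normal(out[i].shape) * np.sqrt(2.0 / fan_in)
    return out

# Hypothetical 4-layer MLP encoder (weights only; biases omitted for brevity).
rng = np.random.default_rng(0)
encoder = [rng.standard_normal((8, 8)) for _ in range(4)]

# Re-initialize the two latest layers, where spurious information is
# found to concentrate; earlier layers are left untouched.
encoder_reinit = reinit_late_layers(encoder, num_late=2, rng=rng)
```

In practice this would be applied to the late blocks of a deep encoder (e.g. the final residual stages of a ResNet) partway through or after SSL pre-training, followed by continued training, so that early-layer features survive while the late layers are re-learned.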