Learning Background Invariance Improves Generalization and Robustness in Self-Supervised Learning on ImageNet and Beyond

Published: 24 Nov 2021, Last Modified: 05 May 2023
Venue: ImageNet PPF 2021
Keywords: self-supervised learning, contrastive learning, representation learning, background invariance, augmentation
TL;DR: Learning background invariance improves generalization, robustness, label and training efficiency in self-supervised learning on ImageNet and beyond
Abstract: Recent progress in self-supervised learning has demonstrated promising results on multiple visual tasks. An important ingredient in high-performing self-supervised methods is the use of data augmentation, training models to place different augmented views of the same image nearby in embedding space. However, commonly used augmentation pipelines treat images holistically, ignoring the semantic relevance of parts of an image—e.g., a subject vs. a background—which can lead to the learning of spurious correlations. Our work addresses this problem by investigating a class of simple, yet highly effective “background augmentations”, which encourage models to focus on semantically-relevant content by discouraging them from focusing on image backgrounds. Through a systematic, comprehensive investigation, we show that background augmentations lead to improved generalization, with substantial improvements ($\sim$1-2% on ImageNet) in performance across a spectrum of state-of-the-art self-supervised methods (MoCo-v2, BYOL, SwAV) on a variety of tasks, even enabling performance on par with the supervised baseline. We also find improved label efficiency, with even larger performance improvements in limited-labels settings (up to 4.2%). Further, we find improved training efficiency: in only 100 epochs, we attain a benchmark accuracy of 74.4%, outperforming many recent self-supervised learning methods trained for 800-1000 epochs. Importantly, we also demonstrate that background augmentations boost generalization and robustness across a number of out-of-distribution settings, including ImageNet-9, natural adversarial examples, adversarial attacks, ImageNet-Renditions, and ImageNet ReaL. We also make progress on completely unsupervised saliency detection, in the process generating the saliency masks that we use for background augmentations.
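To make the idea of a background augmentation concrete, here is a minimal sketch of a batch-level background swap built on precomputed saliency masks. This is an illustrative assumption, not the paper's released implementation; the function name `background_swap` and its interface are hypothetical.

```python
# Minimal sketch (assumed interface, not the authors' code): replace each
# image's background with the background of another image in the batch,
# keeping the salient foreground intact.
import torch

def background_swap(images: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """Compose foregrounds onto backgrounds drawn from other batch elements.

    images: (B, C, H, W) float tensor of input images
    masks:  (B, 1, H, W) binary saliency masks (1 = foreground, 0 = background)
    """
    # Shuffle the batch to choose a donor background for each image.
    perm = torch.randperm(images.size(0))
    donors = images[perm]
    # Composite: foreground pixels from the original, background from the donor.
    return masks * images + (1 - masks) * donors

if __name__ == "__main__":
    imgs = torch.rand(8, 3, 224, 224)
    msks = (torch.rand(8, 1, 224, 224) > 0.5).float()
    out = background_swap(imgs, msks)
    print(out.shape)  # torch.Size([8, 3, 224, 224])
```

An augmentation of this kind would be applied when generating views for the contrastive or self-distillation objective, so that matching views can no longer be identified by their shared background.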
Submission Track: Main track, 5 pages max