What Should Not Be Contrastive in Contrastive Learning

Published: 12 Jan 2021, Last Modified: 05 May 2023 | ICLR 2021 Poster | Readers: Everyone
Keywords: Self-supervised learning, Contrastive learning, Representation learning
Abstract: Recent self-supervised contrastive methods have been able to produce impressive transferable visual representations by learning to be invariant to different data augmentations. However, these methods implicitly assume a particular set of representational invariances (e.g., invariance to color), and can perform poorly when a downstream task violates this assumption (e.g., distinguishing red vs. yellow cars). We introduce a contrastive learning framework which does not require prior knowledge of specific, task-dependent invariances. Our model learns to capture varying and invariant factors for visual representations by constructing separate embedding spaces, each of which is invariant to all but one augmentation. We use a multi-head network with a shared backbone which captures information across each augmentation and alone outperforms all baselines on downstream tasks. We further find that the concatenation of the invariant and varying spaces performs best across all tasks we investigate, including coarse-grained, fine-grained, and few-shot downstream classification tasks, and various data corruptions.
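The abstract describes a shared backbone feeding several projection heads, where each head defines an embedding space that is invariant to all but one augmentation, and the final representation concatenates the invariant and augmentation-sensitive spaces. The sketch below is a minimal illustration of that idea only, not the authors' released code: the ResNet-18 backbone, the InfoNCE loss, and all names (`MultiHeadContrastive`, `info_nce`, the toy view pairs) are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class MultiHeadContrastive(nn.Module):
    """Minimal sketch: one shared backbone, several projection heads.

    Each head defines its own embedding space. The intent (per the abstract)
    is that space k stays sensitive to augmentation k, because its positive
    pairs share augmentation k while the other augmentations are varied
    ("leave one out"); this sketch only shows the architecture.
    """

    def __init__(self, num_heads=4, dim=128):
        super().__init__()
        backbone = resnet18()
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # keep pooled features only
        self.backbone = backbone
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                          nn.Linear(feat_dim, dim))
            for _ in range(num_heads)
        )

    def forward(self, x):
        h = self.backbone(x)  # shared representation
        return [F.normalize(head(h), dim=-1) for head in self.heads]


def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE between two batches of L2-normalised embeddings."""
    logits = z1 @ z2.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


# Toy usage: views[k] is the positive view pair for embedding space k.
# In the paper's setup the pair for space k would share augmentation k and
# differ in the others; random tensors stand in for real augmented images.
model = MultiHeadContrastive(num_heads=4)
views = [(torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224))
         for _ in range(4)]

loss = 0.0
for k, (v1, v2) in enumerate(views):
    z1 = model(v1)[k]  # embeddings from head k only
    z2 = model(v2)[k]
    loss = loss + info_nce(z1, z2)
loss.backward()
```

At evaluation time, the abstract's best-performing representation would correspond to concatenating the outputs of all heads (the invariant and varying spaces) rather than using any single head alone.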
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Data: [CUB-200-2011](https://paperswithcode.com/dataset/cub-200-2011), [ImageNet-C](https://paperswithcode.com/dataset/imagenet-c), [ObjectNet](https://paperswithcode.com/dataset/objectnet), [Oxford 102 Flower](https://paperswithcode.com/dataset/oxford-102-flower), [iNaturalist](https://paperswithcode.com/dataset/inaturalist)