Improving Self-Supervised Learning by Characterizing Idealized Representations

Published: 31 Oct 2022, Last Modified: 12 Jan 2023
Decision: NeurIPS 2022 Accept
Keywords: Self-Supervised Learning, Invariances, Contrastive Learning, Machine Learning, Representation Learning
TL;DR: We characterize idealized self-supervised representations, which leads to actionable insights for improving SSL algorithms.
Abstract: Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear what characteristics of their representations lead to high downstream accuracies. In this work, we characterize properties that SSL representations should ideally satisfy. Specifically, we prove necessary and sufficient conditions such that for any task invariant to given data augmentations, probes (e.g., linear or MLP) trained on that representation attain perfect accuracy. These requirements lead to a unifying conceptual framework for improving existing SSL methods and deriving new ones. For contrastive learning, our framework prescribes simple but significant improvements to previous methods, such as using asymmetric projection heads. For non-contrastive learning, we use our framework to derive a simple and novel objective. Our resulting SSL algorithms outperform baselines on standard benchmarks, including outperforming SwAV+multicrops on ImageNet linear probing.
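To make the "asymmetric projection heads" idea from the abstract concrete, here is a minimal illustrative sketch: an InfoNCE-style contrastive loss where the two augmented views are projected by two different heads rather than a shared one. This is an assumption-laden toy (linear heads, random data, a generic InfoNCE loss), not the paper's exact objective.

```python
import numpy as np

# Toy sketch of contrastive learning with ASYMMETRIC projection heads.
# The head form (linear), sizes, and loss are illustrative assumptions,
# not the paper's method.

rng = np.random.default_rng(0)

def project(x, W):
    """A minimal linear projection head; real heads are typically MLPs."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows

def info_nce(z1, z2, temperature=0.1):
    """Cross-entropy over pairwise similarities; matching rows are positives."""
    logits = z1 @ z2.T / temperature                 # (n, n) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives on the diagonal

n, d, p = 8, 16, 4
h = rng.normal(size=(n, d))                   # encoder outputs for a batch
x1 = h + 0.01 * rng.normal(size=h.shape)      # augmented view 1
x2 = h + 0.01 * rng.normal(size=h.shape)      # augmented view 2

W1 = rng.normal(size=(d, p))   # projection head for view 1
W2 = rng.normal(size=(d, p))   # a *different* head for view 2 (asymmetry)

loss = info_nce(project(x1, W1), project(x2, W2))
print(loss)
```

The only change from a standard symmetric setup is that `W2` is a separate parameter set from `W1`; in the symmetric variant one would reuse `W1` for both views.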
Supplementary Material: pdf
