Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods

Published: 31 Oct 2022, 18:00; Last Modified: 15 Oct 2022, 21:23. Venue: NeurIPS 2022 (Accept)
Keywords: self-supervised learning, interpretability, understanding, local spectral methods, global spectral methods
TL;DR: We unify self-supervised methods under the umbrella of spectral embedding (global and local for contrastive vs. non-contrastive learning, respectively), shedding new light on the benefits of each.
Abstract: Self-Supervised Learning (SSL) surmises that inputs and pairwise positive relationships are enough to learn meaningful representations. Although SSL has recently reached a milestone, outperforming supervised methods in many modalities, its theoretical foundations are limited, method-specific, and fail to provide principled design guidelines to practitioners. In this paper, we propose a unifying framework under the umbrella of spectral manifold learning. Through the course of this study, we demonstrate that VICReg, SimCLR, BarlowTwins, and related methods correspond to canonical spectral methods such as Laplacian Eigenmaps and ISOMAP. From this unified viewpoint, we obtain (i) the closed-form optimal representation, (ii) the closed-form optimal network parameters in the linear regime, (iii) the impact of the pairwise relations used during training on each of those quantities and on downstream task performance, and most importantly, (iv) the first theoretical bridge from contrastive and non-contrastive methods to global and local spectral methods, respectively, hinting at the benefits and limitations of each. For example, if the pairwise relation is aligned with the downstream task, all SSL methods produce optimal representations for that downstream task.
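To make the spectral-embedding viewpoint concrete, here is a minimal, illustrative sketch of Laplacian Eigenmaps, the local spectral method the abstract relates to non-contrastive SSL. The affinity matrix plays the role of the pairwise positive relations; the toy graph, the weak bridge edge, and all names here are hypothetical and not taken from the paper.

```python
import numpy as np

def laplacian_eigenmaps(A, k):
    """Embed the nodes of a symmetric affinity matrix A into k dimensions
    using the eigenvectors of the unnormalized graph Laplacian L = D - A
    associated with the smallest nonzero eigenvalues."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return vecs[:, 1:k + 1]              # skip the constant eigenvector

# Toy "positive pair" graph: samples 0-2 are mutually related, as are 3-5,
# with one weak cross-cluster edge so the graph is connected.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.01

Z = laplacian_eigenmaps(A, k=1)
# Samples sharing positive relations receive nearby embedding coordinates,
# so the one-dimensional embedding separates the two clusters by sign.
print(Z[:, 0])
```

Under the paper's correspondence, the pairwise relation matrix used by a non-contrastive SSL method plays the role of `A` here, which is why relations aligned with the downstream task yield representations that are already optimal for it.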
Supplementary Material: pdf
