Contrastive Learning Can Find An Optimal Basis For Approximately View-Invariant Functions

Published: 01 Feb 2023, Last Modified: 14 Feb 2023 · ICLR 2023 poster
Keywords: contrastive learning, self-supervised learning, representation learning, kernel, kernel PCA, positive definite, eigenfunction, spectral clustering, invariance, Markov chain, minimax optimal
TL;DR: We show that existing contrastive objectives approximate a "positive-pair kernel", and that applying Kernel PCA produces a representation that is provably optimal for supervised learning of functions that assign similar values to positive pairs.
Abstract: Contrastive learning is a powerful framework for learning self-supervised representations that generalize well to downstream supervised tasks. We show that multiple existing contrastive learning methods can be reinterpreted as learning kernel functions that approximate a fixed *positive-pair kernel*. We then prove that a simple representation obtained by combining this kernel with PCA minimizes the worst-case approximation error of linear predictors, under a straightforward assumption that positive pairs have similar labels. Our analysis is based on a decomposition of the target function in terms of the eigenfunctions of a positive-pair Markov chain, and a surprising equivalence between these eigenfunctions and the output of kernel PCA. We give generalization bounds for downstream linear prediction using our kernel PCA representation, and show empirically on a set of synthetic tasks that applying kernel PCA to contrastive learning models can indeed approximately recover the Markov chain eigenfunctions, although the accuracy depends on the kernel parameterization as well as on the augmentation strength.
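A minimal sketch (not taken from the paper or its code release) of the pipeline the abstract describes: embeddings from an already-trained contrastive encoder are treated as inducing a kernel via inner products, assumed here to approximate the positive-pair kernel; standard kernel PCA is then applied to the kernel matrix, and the top components are used as features for downstream linear prediction. The array `features`, the helper names, and the centering step are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def kernel_matrix(features):
    """Kernel matrix k(x_i, x_j) = <f(x_i), f(x_j)> from a learned feature map f.
    Under the paper's framing, such a kernel is an approximation to the
    positive-pair kernel (this function name is illustrative)."""
    return features @ features.T

def kernel_pca(K, num_components):
    """Standard kernel PCA: center the kernel matrix, eigendecompose it, and
    return the projections onto the top principal components."""
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    # Centering is the textbook kernel PCA step; it may differ from the paper's exact setup.
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    eigvals, eigvecs = np.linalg.eigh(Kc)          # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:num_components]
    eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]
    # Projection of training point i onto component j is sqrt(lambda_j) * v_j[i].
    return eigvecs * np.sqrt(np.maximum(eigvals, 0.0))

# Random stand-in for embeddings produced by a trained contrastive encoder.
rng = np.random.default_rng(0)
features = rng.normal(size=(512, 128))      # (num_examples, embedding_dim)
K = kernel_matrix(features)
Z = kernel_pca(K, num_components=32)        # downstream linear predictors are fit on Z
```

In the paper's analysis, the columns of such a kernel PCA representation correspond (approximately) to eigenfunctions of the positive-pair Markov chain, which is what makes them a good basis for approximately view-invariant target functions.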
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Unsupervised and Self-supervised learning