Multiple Positive Views in Self-Supervised Learning

15 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: self-supervised learning, contrastive learning, representation learning, multiview learning
TL;DR: We introduce a plug-and-play approach for multiview self-supervised learning that augments existing two-view frameworks, yielding significant gains in accuracy and computational efficiency across multiple datasets and architectures.
Abstract: Contrastive learning is a powerful technique for self-supervised learning (SSL) that enforces invariance between two augmented views. Advancements such as the "core view" (Tian et al., 2020a) and multi-cropping have harnessed insights from multiple views, achieving state-of-the-art performance. However, the complexities of multiview learning remain only partially explored. In this paper, we introduce a "plug-and-play" multi-positive-view ($\geq3$) learning approach that integrates seamlessly with existing two-view SSL architectures. Theoretical and empirical analyses demonstrate the feasibility of enhancing traditional SSL models by incorporating multiple positive views. By mitigating the intrinsic biases toward sufficiency and minimality in the embeddings, our method improves average accuracy (by 2% on CIFAR-10 and 26% on Tiny ImageNet) and achieves significant speed-ups (3--4 times) across five datasets and eight architectures. Our research exposes the double-edged nature of the conventional assumptions behind two-view suitability and mitigates their drawbacks, paving the way for future investigations in multiview SSL.
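The abstract does not specify the training objective, so the following is only a rough illustration of what a multi-positive-view ($\geq3$) contrastive loss can look like: a minimal PyTorch sketch that averages the InfoNCE log-likelihood over every positive of each anchor. The function name, the (V, N, D) input convention, and the temperature default are illustrative assumptions, not the paper's method.

import torch
import torch.nn.functional as F

def multi_positive_info_nce(z: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch (not the paper's exact objective).

    z: stacked embeddings of shape (V, N, D) -- V >= 2 augmented views of N images.
    """
    V, N, D = z.shape
    z = F.normalize(z, dim=-1).reshape(V * N, D)
    sim = z @ z.t() / temperature                      # (V*N, V*N) pairwise similarities
    self_mask = torch.eye(V * N, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))    # an anchor never matches itself
    # Two embeddings are positives iff they come from the same source image.
    img_ids = torch.arange(N, device=z.device).repeat(V)
    pos_mask = (img_ids[:, None] == img_ids[None, :]) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average the log-likelihood over each anchor's V-1 positives.
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_mask.sum(dim=1)
    return loss.mean()

With V = 2 this reduces to the standard two-view NT-Xent objective, which is consistent with the abstract's claim that the approach plugs into, rather than replaces, existing two-view frameworks.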
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 126