Self-supervised representation learning via adaptive hard-positive mining

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission
Keywords: self-supervised learning, contrastive learning, unsupervised classification
Abstract: Despite their success in perception tasks over the last decade, deep neural networks are notoriously hungry for labeled training data, which limits their applicability to real-world problems. Self-supervised learning has therefore attracted intensive attention recently, and contrastive learning has become one of the dominant approaches to effective feature extraction, achieving state-of-the-art performance. In this paper, we first show theoretically that existing contrastive methods cannot fully exploit the training samples in terms of mining the nearest positive samples. We then propose a new contrastive method called AdaCLR$^{pre}$ (adaptive self-supervised contrastive learning representations), which, as supported by our proof, explores the training samples more effectively and behaves more like supervised contrastive learning. We thoroughly evaluate the quality of the learned representations on ImageNet with the pretraining-based version (AdaCLR$^{pre}$). In terms of accuracy, AdaCLR$^{pre}$ outperforms state-of-the-art contrastive models by 3.0\% with 100 extra training epochs.
One-sentence Summary: A new approach that bridges the gap between supervised and self-supervised contrastive learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=8So8pjRUwK
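
The abstract describes the idea only at a high level: a contrastive objective that additionally mines nearest positive samples, bringing the loss closer to supervised contrastive learning. The actual AdaCLR$^{pre}$ objective is not given on this page, so the following is only a minimal PyTorch sketch of that general idea, assuming an NT-Xent-style loss in which, for each anchor, the most similar remaining embedding in the batch is treated as an extra mined positive. The function name and all details are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def nt_xent_with_mined_positive(z1, z2, temperature=0.1):
        # z1, z2: (N, D) embeddings of two augmented views of the same N images.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        n = z1.size(0)
        device = z1.device

        z = torch.cat([z1, z2], dim=0)                  # (2N, D)
        sim = z @ z.t() / temperature                   # pairwise cosine similarities
        eye = torch.eye(2 * n, dtype=torch.bool, device=device)
        sim = sim.masked_fill(eye, float("-inf"))       # drop self-similarity

        idx = torch.arange(2 * n, device=device)
        # Standard positive: the other augmented view of the same image.
        aug_pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(device)

        # Mined "hard" positive: the most similar remaining embedding in the batch
        # (assumed criterion; the paper's adaptive rule is not specified here).
        mining = sim.clone()
        mining[idx, aug_pos] = float("-inf")
        mined_pos = mining.argmax(dim=1)

        log_prob = F.log_softmax(sim, dim=1)
        loss = -(log_prob[idx, aug_pos] + log_prob[idx, mined_pos]) / 2
        return loss.mean()

In practice, z1 and z2 would be projection-head outputs of an encoder trained as in standard contrastive pipelines; the "adaptive" aspect of the proposed method presumably replaces the simple argmax criterion above, but that detail is not recoverable from the abstract.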