Rényi Supervised Contrastive Learning for Transferable Representation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Supervised Learning, Representation Learning, Contrastive Learning, Transfer Learning
TL;DR: We present an effective and robust method for learning transferable representations via Rényi supervised contrastive learning.
Abstract: A major goal of representation learning is to train features that transfer to various tasks and datasets. A conventional approach is to pre-train a neural network on a large-scale labeled dataset, e.g., ImageNet, and use its features for downstream tasks. However, the features often lack transferability due to the class-collapse issue: existing supervised losses (such as cross-entropy) suppress intra-class variation and limit the capacity to learn rich representations. This issue becomes more severe when pre-training datasets are class-imbalanced or coarsely labeled. To address the problem, we propose a new representation learning method, named Rényi supervised contrastive learning (RényiSCL), which can effectively learn transferable representations from a labeled dataset. Our main idea is to apply the recently proposed self-supervised Rényi contrastive learning in the supervised setup. We show that RényiSCL can mitigate the class-collapse problem by contrasting features with both instance-wise and class-wise information. Through experiments on the ImageNet dataset, we show that RényiSCL outperforms existing supervised and self-supervised methods on various transfer learning tasks. In particular, we also validate the effectiveness of RényiSCL on class-imbalanced and coarse-labeled datasets.
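Since only the abstract is available here, the sketch below shows the standard supervised contrastive (SupCon) loss of Khosla et al. (2020), the family of objectives that supervised contrastive methods such as RényiSCL build on; it is not the paper's RényiSCL objective, and the function name, temperature value, and masking details are illustrative assumptions. For reference, the Rényi divergence of order $\alpha$ underlying the paper's approach is $D_\alpha(P\,\|\,Q) = \frac{1}{\alpha-1}\log \mathbb{E}_{x\sim Q}\big[(p(x)/q(x))^{\alpha}\big]$, which recovers the KL divergence as $\alpha \to 1$.

```python
# Minimal sketch of a SupCon-style supervised contrastive loss (Khosla et al., 2020).
# NOTE: this is the standard baseline objective, not RényiSCL itself; names and
# defaults here are illustrative assumptions.
import torch
import torch.nn.functional as F

def supcon_loss(features: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """features: (N, D) embeddings; labels: (N,) integer class ids."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature  # (N, N) scaled cosine similarities
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    # Positives: samples sharing the anchor's label, excluding the anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Softmax denominator ranges over all non-self pairs.
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Mean log-likelihood of positives per anchor; anchors without positives are skipped.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(sum_log_prob_pos[valid] / pos_counts[valid]).mean()

if __name__ == "__main__":
    feats = torch.randn(8, 128)
    labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
    print(supcon_loss(feats, labels))
```

In the purely self-supervised setting the positive mask would contain only augmented views of the same image; per the abstract, RényiSCL instead contrasts against both instance-level and class-level positives, with a Rényi-divergence-based objective in place of the log-softmax term above.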
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip