Label Similarity Aware Contrastive Learning

22 Sept 2022 (modified: 13 Feb 2023)
ICLR 2023 Conference Withdrawn Submission
Keywords: Contrastive Learning, Supervised Learning, Representation Learning
Abstract: Contrastive learning dramatically improves performance in self-supervised learning by maximizing the alignment between two representations obtained from the same sample while distributing all representations uniformly. Supervised contrastive learning extends this idea and boosts downstream performance by pulling together embedding vectors that belong to the same class, even when the vectors come from different samples. In this work, we generalize supervised contrastive learning to a universal framework that fully exploits the ground-truth similarities between samples: instead of pulling representations with the same class label together equally, every pair of representations is pulled together in proportion to its label similarity. To quantitatively interpret the feature space after contrastive learning, we propose label similarity aware alignment and uniformity, which measure how well genuinely similar samples are aligned and how well the feature distribution preserves maximal information. We prove asymptotically, and verify empirically, that our proposed contrastive loss optimizes these two properties, and that the optimized properties positively affect task performance. Comprehensive experiments on NLP, Vision, Graph, and Multimodal benchmark datasets using BERT, ResNet, GIN, and LSTM encoders consistently show that our loss outperforms previous self-supervised and supervised contrastive losses across a wide range of data types and corresponding encoder architectures. Introducing a task-specific label similarity function further improves downstream performance.
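The exact objective is not spelled out on this page, but the abstract describes pairs being pulled together in proportion to their label similarity, reducing to supervised contrastive learning when similarity is 1 for matching class labels and 0 otherwise. The sketch below is an illustrative PyTorch-style reading of that idea, together with label-similarity-weighted alignment and a uniformity measure in the spirit of Wang & Isola (2020); the function names, temperature value, and weighting scheme are assumptions for illustration, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def label_similarity_contrastive_loss(features, label_sim, temperature=0.1):
        # features:  (N, d) encoder outputs; label_sim: (N, N) ground-truth
        # similarity in [0, 1]. With label_sim[i, j] = 1 iff labels match,
        # this reduces to supervised contrastive learning (SupCon).
        z = F.normalize(features, dim=1)
        logits = z @ z.t() / temperature                  # pairwise cosine similarities
        off_diag = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
        # log-softmax over all non-self pairs
        log_prob = logits - torch.logsumexp(
            logits.masked_fill(~off_diag, float('-inf')), dim=1, keepdim=True)
        weights = label_sim * off_diag.float()            # pull pairs in proportion to label similarity
        per_sample = -(weights * log_prob).sum(1) / weights.sum(1).clamp(min=1e-8)
        return per_sample.mean()

    def label_similarity_alignment(z, label_sim):
        # average squared pairwise distance weighted by label similarity
        # (z assumed L2-normalized); lower = better aligned
        d2 = torch.cdist(z, z).pow(2)
        return (label_sim * d2).sum() / label_sim.sum().clamp(min=1e-8)

    def uniformity(z, t=2.0):
        # log of the average pairwise Gaussian potential; lower = more uniform
        return torch.log(torch.exp(-t * torch.pdist(z).pow(2)).mean())

For plain class labels, passing label_sim = (labels[:, None] == labels[None, :]).float() recovers SupCon-style behavior; a task-specific label similarity function would instead fill this matrix with graded similarities between samples.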
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
TL;DR: Label similarity aware contrastive learning builds a better representation space and improves downstream performance by optimizing alignment and uniformity.
Supplementary Material: zip