UnDiMix: Hard Negative Sampling Strategies for Contrastive Representation Learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: Contrastive Learning, Self-Supervised Learning, Hard Negative Sampling
TL;DR: We introduce UnDiMix, a hard negative sampling strategy that takes into account anchor similarity, model uncertainty, and representativeness.
Abstract: One of the challenges in contrastive learning is the selection of appropriate \textit{hard negative} examples in the absence of label information. Random sampling or importance-sampling methods based on feature similarity often lead to sub-optimal performance. In this work, we introduce UnDiMix, a hard negative sampling strategy that takes into account anchor similarity, model uncertainty, and diversity. Experimental results on several benchmarks show that UnDiMix improves negative sample selection, and subsequently downstream performance, when compared to state-of-the-art contrastive learning methods. Code is available at \textit{anon. link}.
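
The abstract describes scoring candidate negatives by three signals: similarity to the anchor, model uncertainty, and diversity. The sketch below is a minimal illustration of that general idea only; the paper's exact formulation is not given here, so the function name, the entropy-based uncertainty proxy, the mean-distance diversity measure, and the weighted mixing with `w_sim`/`w_unc`/`w_div` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sample_hard_negatives(anchor, candidates, probs, k,
                          w_sim=1.0, w_unc=1.0, w_div=1.0):
    """Illustrative hard-negative scoring (not the paper's exact method).

    anchor:     (d,) embedding of the anchor sample.
    candidates: (n, d) embeddings of candidate negatives.
    probs:      (n, c) softmax outputs used as an uncertainty proxy.
    Returns the indices of the k highest-scoring candidates.
    """
    anchor = F.normalize(anchor, dim=0)
    candidates = F.normalize(candidates, dim=1)

    # (1) Anchor similarity: harder negatives lie closer to the anchor.
    sim = candidates @ anchor                                   # (n,)

    # (2) Model uncertainty: entropy of the output distribution
    #     (one possible proxy; the paper may define this differently).
    unc = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)    # (n,)

    # (3) Diversity: distance from the mean candidate embedding,
    #     favoring negatives that are not redundant with each other.
    div = (candidates - candidates.mean(dim=0)).norm(dim=1)     # (n,)

    # Rescale each signal to [0, 1] before mixing (an assumption).
    def norm01(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    score = w_sim * norm01(sim) + w_unc * norm01(unc) + w_div * norm01(div)
    return score.topk(k).indices
```

In a contrastive training loop, the returned indices would select which in-batch (or memory-bank) negatives enter the InfoNCE denominator for a given anchor; the equal default weights are a placeholder, not a tuned configuration.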
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Unsupervised and Self-supervised learning