Contrastive Learning with Hard Negative Samples

Published: 12 Jan 2021, Last Modified: 22 Oct 2023
ICLR 2021 Poster
Readers: Everyone
Keywords: contrastive learning, unsupervised representation learning, hard negative sampling
Abstract: We consider the question: how can one sample good negative examples for contrastive learning? We argue that, as with metric learning, learning contrastive representations benefits from hard negative samples (i.e., points that are difficult to distinguish from an anchor point). The key challenge in using hard negatives is that contrastive methods must remain unsupervised, making it infeasible to adopt existing negative sampling strategies that rely on label information. In response, we develop a new class of unsupervised methods for selecting hard negative samples in which the user can control the amount of hardness. A limiting case of this sampling results in a representation that tightly clusters each class and pushes different classes as far apart as possible. The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
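To make the abstract's claim about "a few additional lines of code" concrete, below is a minimal PyTorch sketch of one way a hardness-controlled negative reweighting can be dropped into a standard InfoNCE-style contrastive loss. This is not the authors' exact estimator: the function name `hard_negative_nce`, the `beta` hardness parameter, and the specific importance-weighting choice (negatives upweighted in proportion to their similarity to the anchor) are illustrative assumptions; see the linked repository below for the reference implementation.

```python
import torch
import torch.nn.functional as F

def hard_negative_nce(anchor, positive, negatives, temperature=0.5, beta=1.0):
    """Illustrative InfoNCE-style loss with hardness-controlled negative weighting.

    anchor:    (B, D) anchor embeddings
    positive:  (B, D) positive-pair embeddings
    negatives: (B, N, D) N negative embeddings per anchor
    beta:      hardness concentration; beta = 0 recovers uniform negative weighting
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Exponentiated cosine similarities.
    pos = torch.exp((anchor * positive).sum(dim=-1) / temperature)                 # (B,)
    neg = torch.exp(torch.einsum("bd,bnd->bn", anchor, negatives) / temperature)   # (B, N)

    # Importance weights: negatives closer to the anchor (harder) receive larger weight.
    # Normalizing by the mean keeps the scale comparable to uniform negative sampling.
    imp = neg ** beta                                   # assumed weighting choice
    weights = imp / imp.mean(dim=-1, keepdim=True)

    neg_term = (weights * neg).sum(dim=-1)              # reweighted negative mass
    return -torch.log(pos / (pos + neg_term)).mean()
```

With `beta = 0` the weights are all one and the loss reduces to the usual uniform-negative objective, which is why such a reweighting can be added on top of an existing contrastive loss with only a small code change.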
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We introduce an unsupervised method for sampling hard negatives for contrastive learning; the resulting embeddings have desirable theoretical properties and improve downstream performance across multiple data modalities.
Supplementary Material: zip
Code: [joshr17/HCL](https://github.com/joshr17/HCL)
Data: [MPQA Opinion Corpus](https://paperswithcode.com/dataset/mpqa-opinion-corpus)
Community Implementations: [5 code implementations on CatalyzeX](https://www.catalyzex.com/paper/arxiv:2010.04592/code)