TL;DR: Mine hard negatives quickly and with less computation for supervised contrastive learning.
Abstract: Contrastive learning is a representation learning paradigm in which a neural network maps data elements to feature vectors. It shapes the feature space by forming groups consisting of an anchor and examples labeled positive or negative according to class membership. Hard negative examples, which lie close to the anchor in the feature space but belong to a different class, improve learning performance. Efficiently finding such high-quality examples in large, high-dimensional datasets is computationally challenging. In this paper, we propose a GPU-friendly LSH scheme that quantizes real-valued feature vectors into binary representations for approximate nearest neighbor search. We demonstrate on several datasets from both textual and visual modalities that our approach outperforms other hard negative mining strategies in computational efficiency without significant performance degradation.
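The abstract describes binarizing feature vectors with LSH and searching for hard negatives among near neighbors. The paper's exact scheme is not given here, so the following is only a minimal sketch of one standard choice, random-hyperplane LSH (SimHash) with Hamming-distance ranking; the function names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def simhash_codes(features, hyperplanes):
    # Random-hyperplane LSH: each bit is the sign of a projection
    # onto one random hyperplane (illustrative, not the paper's scheme).
    return (features @ hyperplanes.T > 0).astype(np.uint8)

def hard_negatives(codes, labels, anchor_idx, k=2):
    # Rank all examples by Hamming distance to the anchor's binary code
    # and return the k closest ones from a *different* class.
    anchor = codes[anchor_idx]
    dists = np.count_nonzero(codes != anchor, axis=1)
    candidates = np.where(labels != labels[anchor_idx])[0]
    order = candidates[np.argsort(dists[candidates])]
    return order[:k]

rng = np.random.default_rng(0)
H = rng.standard_normal((16, 8))       # 16 hash bits for 8-dim features
X = rng.standard_normal((100, 8))      # toy feature vectors
y = rng.integers(0, 5, size=100)       # toy class labels
codes = simhash_codes(X, H)
negs = hard_negatives(codes, y, anchor_idx=0, k=2)
```

Because the codes are binary, Hamming distances can be computed with bitwise operations, which is what makes this style of search GPU-friendly compared with exact Euclidean nearest-neighbor search over real-valued vectors.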
Primary Area: Deep Learning->Other Representation Learning
Keywords: contrastive learning, hard negative sampling, locality sensitive hashing, representation learning, image retrieval, text retrieval
Submission Number: 9760