TL;DR: We propose a unified framework for understanding contrastive learning through the lens of cosine similarity, and present two key theoretical insights derived from this framework.
Abstract: Contrastive learning operates on a simple yet effective principle: embeddings of positive pairs are pulled together, while those of negative pairs are pushed apart. In this paper, we propose a unified framework for understanding contrastive learning through the lens of cosine similarity, and we present two key theoretical insights derived from this framework. First, in full-batch settings, we show that perfect alignment of positive pairs is unattainable when negative-pair similarities fall below a threshold, and that this misalignment can be mitigated by incorporating within-view negative pairs into the objective. Second, in mini-batch settings, smaller batch sizes induce stronger separation among negative pairs in the embedding space, i.e., higher variance in their similarities, which in turn degrades the quality of the learned representations relative to full-batch training. To address this, we propose an auxiliary loss that reduces the variance of negative-pair similarities in mini-batch settings. Empirical results show that incorporating the proposed loss improves performance in small-batch settings.
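To make the two modifications described above concrete, here is a minimal PyTorch sketch, not the authors' implementation (see the linked repository for that): an NT-Xent-style contrastive loss whose 2N x 2N similarity matrix includes within-view negative pairs, combined with an auxiliary term penalizing the variance of negative-pair cosine similarities within the mini-batch. The function name, the temperature, and the weight `aux_weight` are illustrative assumptions.

```python
# Illustrative sketch only: assumes a SimCLR-style two-view setup.
# Not the authors' implementation (see the linked repository for that).
import torch
import torch.nn.functional as F

def contrastive_loss_with_variance_penalty(z1, z2, temperature=0.5, aux_weight=1.0):
    """NT-Xent-style loss over a 2N x 2N cosine-similarity matrix
    (so within-view negatives are included), plus an auxiliary term
    penalizing the variance of negative-pair similarities.

    z1, z2: (N, d) embeddings of the two views of a mini-batch.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    cos = z @ z.T                                       # all pairwise cosine similarities

    # Each row i has exactly one positive: the other view of the same sample.
    idx = torch.arange(n, device=z.device)
    pos = torch.cat([idx + n, idx])                     # positive index per row

    # Exclude self-similarities from the softmax.
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    logits = (cos / temperature).masked_fill(self_mask, float("-inf"))
    nt_xent = F.cross_entropy(logits, pos)

    # Auxiliary term (assumed form): variance of the cosine similarities of
    # all negative pairs, i.e., entries that are neither self- nor positive pairs.
    neg_mask = ~self_mask
    neg_mask[torch.arange(2 * n, device=z.device), pos] = False
    aux = cos[neg_mask].var()

    return nt_xent + aux_weight * aux
```

In use, this would simply replace the standard contrastive loss in the training loop; `aux_weight` trades off the auxiliary variance penalty against the contrastive term and would need tuning per setting.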
Lay Summary: Contrastive learning trains models by pulling the representations of similar items together and pushing those of dissimilar items apart. We propose a unified framework for understanding this process through the lens of cosine similarity, and we present two key theoretical insights. First, in full-batch settings, positive pairs cannot be perfectly aligned when negative-pair similarities fall below a threshold, but this can be mitigated by adjusting the objective. Second, in mini-batch settings, smaller batch sizes push negative pairs further apart in the embedding space, which hurts the quality of the learned representations. To address this, we propose an auxiliary loss and show that incorporating it improves performance in small-batch settings.
Link To Code: https://github.com/leechungpa/embedding-similarity-cl
Primary Area: Theory
Keywords: contrastive learning, representation learning, embedding, similarity, negative pair, positive pair, variance
Submission Number: 2413