Learning Contrastive Embedding in Low-Dimensional Space

Published: 31 Oct 2022, Last Modified: 13 Oct 2022 · NeurIPS 2022 Accept · Readers: Everyone
Keywords: contrastive learning, dimensionality reduction, autoencoder, representation learning
Abstract: Contrastive learning (CL) pretrains feature embeddings to scatter instances in the feature space so that the training data can be well discriminated. Most existing CL techniques encourage learning such feature embeddings in a high-dimensional space to maximize instance discrimination. However, this practice may lead to undesired results where the scattered instances are sparsely distributed in the high-dimensional feature space, making it difficult to capture the underlying similarity between pairwise instances. To this end, we propose a novel framework called contrastive learning with low-dimensional reconstruction (CLLR), which adopts a regularized projection layer to reduce the dimensionality of the feature embedding. In CLLR, we introduce a sparse/low-rank regularizer to adaptively reconstruct a low-dimensional projection space while preserving the basic objective for instance discrimination, thereby learning contrastive embeddings that alleviate the above issue. Theoretically, we prove a tighter error bound for CLLR; empirically, the superiority of CLLR is demonstrated across multiple domains. Both theoretical and experimental results emphasize the significance of learning low-dimensional contrastive embeddings.
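The abstract describes a contrastive objective combined with a regularized low-dimensional projection head. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes PyTorch, uses a standard InfoNCE loss, and substitutes a nuclear-norm penalty as one concrete choice of low-rank regularizer; the names (ProjectionHead, info_nce, lam) and dimensions are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Linear projection from a high-dimensional backbone feature to a low-dimensional embedding."""
    def __init__(self, in_dim=2048, out_dim=64):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h):
        # L2-normalize the projected embedding, as is common in contrastive learning.
        return F.normalize(self.proj(h), dim=-1)

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE loss over two augmented views, using in-batch negatives."""
    logits = z1 @ z2.t() / temperature                     # (B, B) pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

def low_rank_penalty(weight):
    """Nuclear-norm surrogate that encourages a low-rank (low-dimensional) projection matrix."""
    return torch.linalg.matrix_norm(weight, ord='nuc')

# Usage sketch: contrastive term plus the low-rank regularizer on the projection layer.
head = ProjectionHead()
h1, h2 = torch.randn(32, 2048), torch.randn(32, 2048)     # backbone features of two views (dummy data)
z1, z2 = head(h1), head(h2)
lam = 1e-3                                                 # regularization weight (assumed value)
loss = info_nce(z1, z2) + lam * low_rank_penalty(head.proj.weight)
loss.backward()

In this sketch the regularizer only shapes the projection layer, while the InfoNCE term preserves instance discrimination; how the actual CLLR regularizer is constructed and weighted is detailed in the paper itself.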
TL;DR: We investigate the curse of dimensionality in CL and propose a new method to demonstrate the significance of learning low-dimensional contrastive embeddings.
Supplementary Material: zip
