Keywords: self-supervised learning, language models, contrastive learning, transformers, natural language processing
Abstract: In NLP, sentence embeddings are crucial for many tasks such as information retrieval, classification, clustering, and visualization of text collections. Currently, top-performing sentence embeddings are derived from pre-trained language models that undergo extensive supervised fine-tuning. This contrasts with computer vision, where self-supervised training has demonstrated remarkable success. Here we show that self-supervision alone can produce high-quality sentence embeddings, albeit slightly below those of state-of-the-art supervised models. We systematically compare several existing augmentation strategies for generating positive pairs in contrastive learning and show that text crops strongly outperform the popular dropout-based augmentation. Using text crops, well-performing embeddings can be obtained even when training from scratch without pre-trained model weights, or when training only a bare token embedding layer without any transformer architecture. Overall, we show that self-supervised learning enables rapid training of text embeddings for a given dataset.
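To illustrate the text-crop augmentation mentioned in the abstract, below is a minimal sketch of how positive pairs for contrastive learning could be formed from two independent random crops of the same document. The function names and crop-length parameters (`random_crop`, `make_positive_pair`, `min_len`, `max_len`) are hypothetical illustrations, not the submission's actual implementation or hyperparameters.

```python
import random

def random_crop(tokens, min_len=8, max_len=64):
    """Sample a contiguous span of tokens as one 'view' of the text.
    Crop lengths here are illustrative assumptions, not the paper's settings."""
    max_possible = min(max_len, len(tokens))
    min_possible = min(min_len, max_possible)
    length = random.randint(min_possible, max_possible)
    start = random.randint(0, len(tokens) - length)
    return tokens[start:start + length]

def make_positive_pair(text):
    """Two independent crops of the same document form a positive pair;
    crops drawn from different documents would act as negatives in the contrastive loss."""
    tokens = text.split()
    return random_crop(tokens), random_crop(tokens)

# Example usage
doc = ("Self-supervised contrastive learning builds sentence embeddings "
       "by pulling together two views of the same text and pushing apart "
       "views of different texts.")
view_a, view_b = make_positive_pair(doc)
print(" ".join(view_a))
print(" ".join(view_b))
```

In this sketch, the two crops overlap in content but differ in span, which is what distinguishes crop-based augmentation from dropout-based schemes that feed the identical sentence through the encoder twice.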
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10683