GASE: Generatively Augmented Sentence Encoding

ACL ARR 2025 May Submission 3541 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: We propose a training-free approach that improves sentence embeddings by leveraging test-time compute: generative text models are applied for data augmentation at inference time. Unlike conventional data augmentation, which relies on synthetic training data, our approach requires neither access to model parameters nor the computational resources typically needed for fine-tuning state-of-the-art models. Generatively Augmented Sentence Encoding (GASE) varies the input text by paraphrasing, summarising, or extracting keywords, and then pools the embeddings of the original and synthetic texts. Experimental results on the Massive Text Embedding Benchmark for Semantic Textual Similarity (STS) demonstrate performance improvements across a range of embedding models using different generative models for augmentation. We find that generative augmentation yields larger improvements for embedding models with lower baseline performance. These findings suggest that integrating generative augmentation at inference time adds semantic diversity and can enhance the robustness and generalisability of sentence embeddings. Our results show that the performance gains depend on both the embedding model and the dataset.
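The abstract outlines the GASE pipeline: generate paraphrase, summary, and keyword variants of the input at inference time, embed the original alongside the variants, and pool the resulting vectors. The sketch below illustrates this flow under stated assumptions; the embedding model name, the `generate` helper, the prompt wording, and the use of mean pooling are illustrative choices, not the authors' exact configuration.

```python
# Minimal sketch of Generatively Augmented Sentence Encoding (GASE):
# vary the input via paraphrase/summary/keyword prompts, embed the
# original and synthetic texts, then pool the embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

# Any sentence embedding model can be substituted here (assumption).
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Illustrative augmentation prompts, not the paper's exact wording.
PROMPTS = {
    "paraphrase": "Paraphrase the following sentence:\n{text}",
    "summary": "Summarise the following sentence in one sentence:\n{text}",
    "keywords": "Extract the keywords from the following sentence:\n{text}",
}

def generate(prompt: str) -> str:
    """Placeholder for a call to any generative text model
    (e.g. an LLM API); returns the model's text completion."""
    raise NotImplementedError

def gase_encode(text: str) -> np.ndarray:
    # Produce synthetic variants of the input at inference time.
    variants = [generate(p.format(text=text)) for p in PROMPTS.values()]
    # Embed the original text together with its augmentations.
    embeddings = embedder.encode([text] + variants)
    # Pool original and synthetic embeddings (mean pooling assumed).
    return np.asarray(embeddings).mean(axis=0)
```

For STS-style evaluation, the pooled vectors would be compared with cosine similarity in place of the plain sentence embeddings; no training or parameter access is required, only extra inference-time compute for generation.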
Paper Type: Short
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: test-time compute, semantic textual similarity, phrase/sentence embedding, paraphrasing, data augmentation, representation learning, inference methods, abstractive summarisation, sentence compression
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English, Arabic, German, Turkish, Spanish, French, Italian, Korean, Dutch, Polish, Chinese, Russian
Submission Number: 3541