Context is Gold to find the Gold Passage: Evaluating and Training Contextual Document Embeddings

ACL ARR 2025 May Submission 7892 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: A limitation of modern document retrieval embedding methods is that they typically encode passages (chunks) from the same document independently, often overlooking crucial contextual information from the rest of the document that could greatly improve individual chunk representations. In this work, we introduce ConTEB (Context-aware Text Embedding Benchmark), a benchmark designed to evaluate retrieval models on their ability to leverage document-wide context. Our results show that state-of-the-art embedding models struggle in retrieval scenarios where context is required. To address this limitation, we propose InSeNT (In-Sequence Negative Training), a novel contrastive post-training approach which, combined with $\textit{late chunking}$ pooling, enhances contextual representation learning while preserving computational efficiency. Our method significantly improves retrieval quality on ConTEB without sacrificing base model performance. We further find that chunks embedded with our method are more robust to suboptimal chunking strategies and larger retrieval corpus sizes. We open-source all artifacts at http://hf.co/anonymous.
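For readers unfamiliar with late chunking pooling, the general idea is to encode the full document in a single forward pass, so that each token representation attends over document-wide context, and only afterwards pool token states into per-chunk embeddings. The sketch below is a minimal illustration of that general pooling scheme, not the paper's InSeNT training procedure; the model name, the `late_chunk_embeddings` helper, and the character-level chunk spans are illustrative assumptions.

```python
# Minimal sketch of late-chunking pooling (illustrative; not the paper's exact implementation).
# Assumptions: any long-context HF encoder with a fast tokenizer; model name and chunk spans
# are placeholders chosen for the example.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "jinaai/jina-embeddings-v2-base-en"  # assumed long-context embedding model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

def late_chunk_embeddings(document: str, chunk_char_spans):
    """Encode the whole document once, then mean-pool token states per chunk span."""
    enc = tokenizer(document, return_tensors="pt",
                    return_offsets_mapping=True, truncation=True)
    offsets = enc.pop("offset_mapping")[0]  # (num_tokens, 2) character offsets per token
    with torch.no_grad():
        token_states = model(**enc).last_hidden_state[0]  # (num_tokens, hidden_dim)

    chunk_vectors = []
    for start, end in chunk_char_spans:
        # Keep tokens whose character span overlaps this chunk; drop special tokens (0, 0).
        mask = (offsets[:, 0] < end) & (offsets[:, 1] > start) & (offsets[:, 1] > offsets[:, 0])
        if mask.any():
            chunk_vectors.append(token_states[mask].mean(dim=0))
        else:
            chunk_vectors.append(token_states.mean(dim=0))  # fallback: whole-document vector
    return torch.stack(chunk_vectors)

# Example usage with arbitrary character-level chunk boundaries from any chunker:
# vectors = late_chunk_embeddings(long_document, [(0, 500), (500, 1000), (1000, 1500)])
```

Because every chunk vector is pooled from token states computed over the entire document, each chunk embedding can reflect context outside its own span, which is the property ConTEB is designed to test.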
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: embedding, long context, retrieval, contextual embeddings
Contribution Types: Approaches low compute settings-efficiency, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 7892