ATEB: Rethinking Advanced NLP Tasks in an Information Retrieval Setting

Published: 07 Jul 2025, Last Modified: 07 Jul 2025 · KnowFM @ ACL 2025 · CC BY 4.0
Keywords: language models, safety, factuality, retrieval
TL;DR: This paper evaluates text embedding models on advanced NLP tasks and presents a new method for improving their performance on these tasks.
Abstract: Traditional text embedding benchmarks primarily evaluate embedding models' ability to capture semantic similarity. However, more advanced NLP tasks, such as safety and factuality, require a deeper understanding of text. These tasks demand an ability to comprehend and process complex information, often involving the handling of sensitive content or the verification of factual statements against reliable sources. We introduce a new benchmark designed to assess and highlight the limitations of embedding models trained on existing information retrieval data mixtures with respect to advanced capabilities, including factuality, safety, instruction following, reasoning, and document-level understanding. The benchmark comprises a diverse set of tasks that simulate real-world scenarios where these capabilities are critical, and it identifies gaps in current state-of-the-art embedding models. Furthermore, we propose a novel method that reformulates these tasks as retrieval tasks during fine-tuning. By framing tasks such as safety or factuality classification as retrieval problems, we leverage the strengths of embedding models in capturing semantic relationships while pushing them to develop a deeper understanding of context and content. Using this approach with single-task fine-tuning, we achieve performance gains of 8% on factuality classification and 13% on safety classification. Our code and data will be publicly available.
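As a minimal sketch of the reformulation idea described in the abstract (assuming a standard contrastive fine-tuning setup; the function name, label strings, and label verbalizations below are illustrative assumptions, not the paper's actual data format):

```python
# Hypothetical sketch: recasting a safety-classification example as a retrieval
# instance (query, positive document, negative documents) for contrastive
# fine-tuning of an embedding model. All names here are illustrative.

def classification_to_retrieval(text: str, label: str, label_set: list[str]) -> dict:
    """Turn one labeled example into a retrieval triplet.

    The input text becomes the query; the gold label, verbalized as a short
    document, becomes the positive; the remaining labels become negatives.
    """
    verbalize = {
        "safe": "This content is safe and appropriate.",
        "unsafe": "This content is unsafe or harmful.",
    }
    positive = verbalize.get(label, label)
    negatives = [verbalize.get(l, l) for l in label_set if l != label]
    return {"query": text, "positive": positive, "negatives": negatives}


# Example: a single safety-classification instance becomes one retrieval
# triplet that could feed a standard contrastive (e.g., InfoNCE-style) loss.
triplet = classification_to_retrieval(
    "How do I pick a lock?", label="unsafe", label_set=["safe", "unsafe"]
)
print(triplet)
```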
Archival Status: Non-archival (not included in proceedings)
Submission Number: 43