ASRank: Zero-Shot Re-Ranking with Answer Scent for Document Retrieval

ACL ARR 2024 June Submission 1694 Authors

14 Jun 2024 (modified: 02 Jul 2024) · CC BY 4.0
Abstract: Retrieval-Augmented Generation (RAG) models have drawn considerable attention in modern open-domain question answering. The effectiveness of RAG depends on the quality of the top retrieved documents, yet conventional retrieval methods sometimes fail to rank the most relevant documents first. In this paper, we introduce ASRank, a new re-ranking method that scores retrieved documents using a zero-shot answer scent: a pre-trained large language model computes the likelihood that the answers derived from each document align with the answer scent. Our approach yields marked improvements across several datasets, including NQ, TriviaQA, WebQA, ArchivalQA, HotpotQA, and EntityQuestions. Notably, ASRank raises Top-1 retrieval accuracy on NQ from $19.2\%$ to $46.5\%$ for MSS and from $22.1\%$ to $47.3\%$ for BM25. Finally, ASRank outperforms state-of-the-art methods on several datasets, e.g., $47.3\%$ Top-1 accuracy with ASRank vs. $35.4\%$ with UPR (Sachan et al., 2022) on BM25-retrieved passages.
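Since the abstract only outlines the scoring idea, the following is a minimal sketch of how an answer-scent re-ranker could be instantiated. The prompt template, the use of gpt2 as the scoring model, and the helper names scent_log_likelihood and rerank are illustrative assumptions, not details from the paper; the authors' actual answer-scent generation and scoring procedure may differ.

```python
# Hedged sketch of answer-scent re-ranking, assuming the score is the
# average log-likelihood of the scent tokens conditioned on the question
# and a retrieved document. Model choice and prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def scent_log_likelihood(question: str, document: str, answer_scent: str) -> float:
    """Average log-likelihood of the answer-scent tokens given the
    question and a retrieved document (assumed scoring rule)."""
    prompt = f"Passage: {document}\nQuestion: {question}\nAnswer:"
    # Truncate long passages so prompt + scent fit gpt2's 1024-token window.
    prompt_ids = tokenizer(prompt, return_tensors="pt",
                           truncation=True, max_length=900).input_ids
    scent_ids = tokenizer(" " + answer_scent, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, scent_ids], dim=1)
    logits = model(input_ids).logits
    # Logits at position t predict token t+1, so slice out the scent span.
    scent_logits = logits[0, prompt_ids.size(1) - 1 : -1]
    log_probs = torch.log_softmax(scent_logits, dim=-1)
    token_ll = log_probs.gather(1, scent_ids[0].unsqueeze(1)).squeeze(1)
    return token_ll.mean().item()

def rerank(question: str, answer_scent: str, documents: list[str]) -> list[str]:
    """Sort retrieved documents by descending scent likelihood."""
    return sorted(documents,
                  key=lambda d: scent_log_likelihood(question, d, answer_scent),
                  reverse=True)
```

In the method as described, the answer scent would itself be produced zero-shot by a large language model conditioned on the question; the sketch takes it as a given string for brevity.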
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: passage retrieval; dense retrieval; re-ranking; open-domain QA; retrieval-augmented generation
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 1694