Letriever: Using Large Language Models as Contextual Retrievers for Long-Context Question Answering

ACL ARR 2024 December Submission 1869 Authors

16 Dec 2024 (modified: 05 Feb 2025) · ACL ARR 2024 December Submission · License: CC BY 4.0
Abstract: Large Language Models (LLMs) have demonstrated outstanding performance on Question Answering (QA) tasks. However, they face significant challenges in long-context QA because they struggle to use lengthy inputs effectively, often producing irrelevant responses. Retrieval-Augmented Generation (RAG) frameworks have been employed to address this issue, but they remain limited by retrieval methods that prioritize superficial lexical overlap, leading to suboptimal context selection. In this study, we propose Letriever, which replaces the traditional embedding-based retriever with an LLM-based retriever. By leveraging the advanced comprehension capabilities of LLMs, Letriever improves retrieval precision and answer accuracy across diverse QA benchmarks. Our findings highlight the potential of LLMs to transform retrieval mechanisms in QA systems.
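The abstract describes swapping the embedding-based retriever in a RAG pipeline for an LLM that reads candidate passages directly. Below is a minimal Python sketch of that general idea; the `llm_complete` callable, the 0-10 relevance prompt, and the `fake_llm` stub are illustrative assumptions, not the paper's actual prompting or scoring scheme, which is not given in this abstract.

```python
# Sketch: use an LLM to judge each candidate chunk's relevance to the
# question, instead of ranking chunks by embedding similarity.
from typing import Callable, List


def llm_retrieve(
    question: str,
    chunks: List[str],
    llm_complete: Callable[[str], str],  # hypothetical LLM call: prompt -> text
    top_k: int = 3,
) -> List[str]:
    """Score each chunk with the LLM and keep the top_k most relevant."""
    scored = []
    for chunk in chunks:
        prompt = (
            "On a scale of 0-10, how useful is the passage below for "
            f"answering the question?\n\nQuestion: {question}\n\n"
            f"Passage: {chunk}\n\nAnswer with a single integer."
        )
        reply = llm_complete(prompt)
        try:
            score = int(reply.strip().split()[0])
        except (ValueError, IndexError):
            score = 0  # unparsable reply -> treat chunk as irrelevant
        scored.append((score, chunk))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:top_k]]


if __name__ == "__main__":
    # Trivial stand-in "LLM" that just keyword-matches, to show the call
    # shape; a real system would query an actual model here.
    def fake_llm(prompt: str) -> str:
        return "7" if "Paris" in prompt else "1"

    chunks = ["Paris is the capital of France.", "Cats sleep a lot."]
    print(llm_retrieve("What is the capital of France?", chunks, fake_llm, top_k=1))
```

Relative to an embedding retriever, this trades index-time cheapness for a full LLM read of every candidate, which is presumably where the abstract's claimed gains in retrieval precision come from.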
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: Information Retrieval and Text Mining, Question Answering, Generation, Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 1869