Abstract: Large Language Models (LLMs) have demonstrated outstanding performance on Question Answering (QA) tasks. However, they face significant challenges in long-context QA because they struggle to effectively utilize lengthy inputs, which often results in irrelevant responses. While Retrieval-Augmented Generation (RAG) frameworks have been employed to address this issue, they remain limited by retrieval methods that prioritize superficial lexical overlap, leading to suboptimal context selection. In this study, we propose Letriever, which replaces the traditional embedding-based retriever with an LLM-based retriever. By leveraging the advanced comprehension capabilities of LLMs, Letriever improves retrieval precision and answer accuracy across diverse QA benchmarks. Our findings highlight the potential of LLMs to transform retrieval mechanisms in QA systems.
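The abstract gives no implementation details, but the core idea — asking an LLM to judge passage relevance instead of ranking by embedding similarity — can be sketched as follows. This is an illustration only, not the paper's actual method: the prompt format, the `llm_relevance` callback, and the stub scorer are all hypothetical placeholders for a real LLM call.

```python
# Hypothetical sketch of an LLM-based retriever for a RAG pipeline.
# `llm_relevance` stands in for a real LLM call that rates how useful a
# passage is for answering the question; a deterministic stub is used here.

from typing import Callable, List, Tuple


def build_prompt(question: str, passage: str) -> str:
    # Ask the LLM for a relevance judgment rather than relying on
    # lexical overlap between question and passage.
    return (
        "Question: " + question + "\n"
        "Passage: " + passage + "\n"
        "On a scale of 0-10, how useful is this passage for answering "
        "the question? Reply with a single number."
    )


def llm_retrieve(
    question: str,
    passages: List[str],
    llm_relevance: Callable[[str], float],
    top_k: int = 2,
) -> List[str]:
    # Score every candidate passage with the LLM and keep the top-k.
    scored: List[Tuple[float, str]] = [
        (llm_relevance(build_prompt(question, p)), p) for p in passages
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored[:top_k]]


# Stub standing in for an actual LLM call (illustration only).
def fake_llm(prompt: str) -> float:
    return 9.0 if "Paris" in prompt else 1.0


if __name__ == "__main__":
    docs = ["Paris is the capital of France.", "Bananas are yellow."]
    top = llm_retrieve("What is the capital of France?", docs, fake_llm, top_k=1)
    print(top)  # → ['Paris is the capital of France.']
```

The retrieved passages would then be passed to the generator LLM as context, as in a standard RAG setup; how Letriever prompts the retriever LLM and aggregates its judgments is not specified in the abstract.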
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: Information Retrieval and Text Mining, Question Answering, Generation, Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 1869