Keywords: RAG, LLM, multi-hop retrieval, reasoning, pragmatics
TL;DR: We propose a pragmatics-inspired, unsupervised method that highlights important information in the contexts retrieved by standard RAG systems, and show that marking evidence within these contexts can improve LLM QA accuracy.
Abstract: We propose a simple, unsupervised method that injects pragmatic principles into retrieval-augmented generation (RAG) frameworks such as Dense Passage Retrieval (DPR). Our approach first identifies the sentences, within the pool of documents retrieved by RAG, that are most relevant to the input question and that together cover all the topics it addresses and no more; it then highlights these sentences in the documents before they are passed to the LLM. We show that this simple idea yields consistent improvements on three question-answering tasks (ARC-Challenge, PubHealth, and PopQA) with three different LLMs, notably enhancing accuracy by up to 19.7% over a conventional RAG system on PubHealth.
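A minimal sketch of the pipeline described above, using greedy lexical-overlap coverage as a crude stand-in for the paper's actual unsupervised relevance-and-coverage criterion; the sentence splitter, stopword list, and `<< >>` highlight markers are illustrative assumptions, not details from the paper.

```python
import re

# Small illustrative stopword list (an assumption, not from the paper).
STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "are", "was",
             "what", "which", "who", "how", "and", "or", "does", "do"}

def content_words(text):
    """Lowercased content words with stopwords removed."""
    return {w for w in re.findall(r"[a-z']+", text.lower())
            if w not in STOPWORDS}

def select_evidence(question, sentences):
    """Greedily pick sentences until the question's content words are
    covered, or no sentence adds coverage -- and no further, mirroring
    the 'all the topics of the question and no more' criterion."""
    target = content_words(question)
    covered, chosen = set(), []
    while covered < target:
        best, gain = None, 0
        for s in sentences:
            if s in chosen:
                continue
            g = len((content_words(s) & target) - covered)
            if g > gain:
                best, gain = s, g
        if best is None:
            break  # no remaining sentence covers a new question topic
        chosen.append(best)
        covered |= content_words(best) & target
    return chosen

def highlight(document, evidence, marker=("<<", ">>")):
    """Wrap the selected sentences in markers before the document is
    handed to the LLM (the marker format is an assumption)."""
    for s in evidence:
        document = document.replace(s, f"{marker[0]}{s}{marker[1]}")
    return document

question = "Which vitamin deficiency causes scurvy?"
doc = ("Scurvy was common among sailors. It is caused by a deficiency "
       "of vitamin C. Citrus fruit was carried on long voyages.")
sentences = re.split(r"(?<=[.!?])\s+", doc)
print(highlight(doc, select_evidence(question, sentences)))
```

On this toy input, the two sentences bearing on the question are wrapped in markers while the unrelated one is left untouched; the highlighted document, rather than the raw retrieval, is what would be placed in the LLM's context.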
Submission Number: 30