RAG-EF: Training-free RAG Enhancement from Expert Feedback

ACL ARR 2025 February Submission 1042 Authors

12 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: As large language models (LLMs) become increasingly integral to digital interactions, their susceptibility to generating inaccurate or nonsensical content, known as hallucination, poses significant challenges. Retrieval-Augmented Generation (RAG) has emerged as a promising technique for curbing hallucinations by leveraging external databases to inform response generation. However, the RAG framework is not without limitations, often requiring computationally expensive methods such as domain-specific retrieval-augmented fine-tuning. We introduce RAG-EF (RAG with Expert Feedback), a novel and efficient training-free enhancement of the RAG framework that incorporates expert-provided feedback in the form of problematic Q&A and context pairs. We also present a new retrieval strategy that uses contexts alongside Q&A pairs to optimize information selection and prevent incorrect responses. To demonstrate the effectiveness of RAG-EF, we establish three new benchmarks built on three datasets and show that adding relevant feedback to the database substantially improves performance.
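The retrieval strategy sketched in the abstract, scoring expert feedback entries by both their Q&A pair and their context alongside ordinary documents, can be pictured with a small training-free pipeline. The Python below is a hypothetical illustration only, not the authors' implementation: the FeedbackEntry structure, the token-overlap similarity, the max-of-scores rule, and the prompt template are all assumptions made for the sketch.

```python
# Minimal sketch of a training-free RAG loop with an expert-feedback store.
# Every name and scoring choice here is illustrative, not taken from the paper.
from dataclasses import dataclass


@dataclass
class FeedbackEntry:
    question: str   # problematic question flagged by an expert
    answer: str     # expert-corrected answer
    context: str    # context supporting the corrected answer


def similarity(a: str, b: str) -> float:
    """Toy lexical overlap; a real system would use dense embeddings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))


def retrieve(query: str, documents: list[str],
             feedback: list[FeedbackEntry], k: int = 3) -> list[str]:
    """Score plain documents by query-to-document similarity, and feedback
    entries by the larger of their question and context similarities,
    then return the top-k snippets for the prompt."""
    scored = [(similarity(query, doc), doc) for doc in documents]
    for fb in feedback:
        score = max(similarity(query, fb.question),
                    similarity(query, fb.context))
        snippet = f"Q: {fb.question}\nA: {fb.answer}\nContext: {fb.context}"
        scored.append((score, snippet))
    return [s for _, s in sorted(scored, key=lambda x: x[0], reverse=True)[:k]]


def build_prompt(query: str, snippets: list[str]) -> str:
    """Assemble a prompt for an off-the-shelf LLM; no fine-tuning is involved."""
    joined = "\n\n".join(snippets)
    return (f"Use the material below to answer.\n\n{joined}\n\n"
            f"Question: {query}\nAnswer:")


if __name__ == "__main__":
    docs = ["The Eiffel Tower is in Paris and was completed in 1889."]
    fb = [FeedbackEntry("When was the Eiffel Tower built?",
                        "It was completed in 1889.",
                        "Construction of the Eiffel Tower finished in 1889.")]
    query = "When was the Eiffel Tower completed?"
    print(build_prompt(query, retrieve(query, docs, fb)))
```

Because feedback entries carry both a corrected answer and its context, retrieving them alongside ordinary documents lets the generator see the expert correction whenever a similar problematic question recurs, without any model updates.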
Paper Type: Short
Research Area: NLP Applications
Research Area Keywords: retrieval-augmented generation (RAG)
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 1042