Leveraging LLM and User Feedback to Improve Retrieval-Augmented Generation When Question and Answer Domains Shift
Abstract: Retrieval-Augmented Generation (RAG) aims to mitigate outdated knowledge and hallucinations in large language models (LLMs) by retrieving timely, accurate information for the model. Nevertheless, we observe that the domain of user questions shifts rapidly over time, causing a significant drop in RAG performance. Additionally, users differ in the answers they expect, yet current RAG approaches generally overlook these variations in preferred answer domains. This study proposes a method that leverages both LLM and User Feedback (LUF) to improve RAG performance under shifts in question and answer domains. Using feedback obtained from the classic RAG process, LUF adapts to changes in questions and user preferences by updating the retriever and the document database. Experiments on five datasets demonstrate that LUF significantly improves both retriever accuracy and LLM responses. Compared to baselines, LUF provides more accurate responses aligned with different user preferences.
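The abstract describes using feedback from the RAG process to update the retriever and document database. A minimal sketch of such a feedback loop is shown below; the class and method names (`FeedbackRAG`, `record_feedback`) and the multiplicative weight update are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a feedback-driven RAG update loop.
# All names and the update rule are assumptions for illustration only.

class FeedbackRAG:
    """Toy retriever whose per-document weights adapt to feedback."""

    def __init__(self, documents, lr=0.5):
        self.documents = documents                   # id -> text
        self.weights = {d: 1.0 for d in documents}   # learned preference weights
        self.lr = lr                                 # feedback step size

    def _overlap(self, question, text):
        # Crude lexical relevance: fraction of question words in the document.
        q = set(question.lower().split())
        t = set(text.lower().split())
        return len(q & t) / (len(q) or 1)

    def retrieve(self, question, k=2):
        # Rank documents by relevance scaled by the learned weight.
        scored = sorted(
            self.documents,
            key=lambda d: self._overlap(question, self.documents[d]) * self.weights[d],
            reverse=True,
        )
        return scored[:k]

    def record_feedback(self, doc_id, positive):
        # Positive feedback (from the user or an LLM judge) boosts the
        # document's weight; negative feedback decays it.
        factor = (1 + self.lr) if positive else (1 - self.lr)
        self.weights[doc_id] *= factor


docs = {
    "d1": "transformers attention architecture",
    "d2": "retrieval augmented generation pipeline",
    "d3": "user preference modeling for answers",
}
rag = FeedbackRAG(docs)
top = rag.retrieve("retrieval augmented generation")
# Simulate negative feedback on the top hit and positive feedback on d3,
# so future retrievals shift toward the user's preferred answer domain.
rag.record_feedback(top[0], positive=False)
rag.record_feedback("d3", positive=True)
```

In a real system the lexical overlap would be replaced by a learned dense retriever, and feedback would also drive retriever fine-tuning and document-database updates rather than just per-document reweighting.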
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: Retrieval, Retrieval-augmented generation, Large language models
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2043