Abstract: Retrieval-Augmented Generation (RAG), which incorporates external knowledge during generation, has the potential to mitigate the hallucinations of large language models (LLMs). Although effective, RAG is vulnerable to retrieval errors: incorrectly retrieved knowledge introduces substantial, uncontrolled noise that degrades performance. In this paper, we propose a simple yet highly effective prompting strategy: re-thinking. Inspired by how humans learn selectively from external knowledge, re-thinking holds that retrieved knowledge cannot be treated equally, selectively retaining useful pieces and removing noisy ones. To make this selection process insightful and comprehensive, we additionally develop a fine-grained, in-depth interaction mechanism that pairs the retrieved knowledge with the query again, enabling richer back-and-forth interactions that surface fine-grained correlations and subtle differences. Experiments on various reasoning benchmarks and LLMs demonstrate the effectiveness of the proposed re-thinking framework.
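The abstract does not specify the method's details, but the core idea of selectively retaining and removing retrieved knowledge can be sketched as a filtering step before generation. In the sketch below, the relevance judge is a stand-in (a simple keyword-overlap heuristic); in the paper's setting it would presumably be an LLM prompted to re-think each query–passage pair. All names here (`rethink_filter`, `keyword_judge`) are hypothetical, not the authors' API.

```python
import re

def words(text):
    """Content words of a text: lowercase alphabetic tokens longer than 3 chars."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def keyword_judge(query, passage):
    """Stub relevance judge: keep a passage if it shares a content word
    with the query. A real re-thinking step would prompt an LLM instead."""
    return bool(words(query) & words(passage))

def rethink_filter(query, passages, judge):
    """Re-pair each retrieved passage with the query and keep only those
    the judge deems relevant, discarding noisy retrievals."""
    return [p for p in passages if judge(query, p)]

query = "Who wrote Hamlet?"
retrieved = [
    "Hamlet is a tragedy written by William Shakespeare.",
    "The Amazon river is the largest river by discharge.",
]
kept = rethink_filter(query, retrieved, keyword_judge)
print(kept)  # only the Shakespeare passage survives the re-think step
```

The retained passages would then be placed in the generation prompt, so the LLM conditions only on knowledge that passed the selection step.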
Paper Type: long
Research Area: Question Answering
Contribution Types: Model analysis & interpretability
Languages Studied: English