re-thinking: In-depth Interactive Thinking with Retrieved Knowledge for Large Language Models

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: The hallucinations of large language models (LLMs) can potentially be mitigated by Retrieval-Augmented Generation (RAG), which incorporates external knowledge into the generation process. Although effective, incorrectly retrieved knowledge introduces uncontrolled noise that damages RAG performance. In this paper, we propose a simple yet highly effective prompting strategy: re-thinking. Drawing inspiration from how humans learn selectively from external knowledge, re-thinking assumes that retrieved knowledge cannot all be treated equally and therefore selectively retains or removes it. In addition, to make this selection process insightful and comprehensive, we develop a fine-grained, in-depth interaction mechanism that pairs the retrieved knowledge with the query again, enabling richer, back-and-forth interactions that surface fine-grained correlations and subtle differences. Experiments conducted on various reasoning benchmarks and LLMs demonstrate the effectiveness of the proposed re-thinking framework.
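The abstract gives no implementation details, so the following is only a minimal illustrative sketch of how a "selectively retain or remove, then answer" loop over retrieved passages might be structured. The function names, prompts, and the `llm`/`retriever` callables are assumptions introduced here for illustration, not the paper's actual method.

```python
# Hypothetical sketch of a re-think-then-answer RAG loop: each retrieved
# passage is paired with the query again, the LLM judges whether to keep it,
# and only the retained passages are used to answer. Prompts and function
# names are illustrative assumptions, not taken from the paper.
from typing import Callable, List

LLM = Callable[[str], str]                    # prompt -> completion
Retriever = Callable[[str, int], List[str]]   # (query, k) -> passages

RETHINK_PROMPT = (
    "Question: {question}\n"
    "Retrieved passage: {passage}\n"
    "Does this passage contain information that genuinely helps answer the "
    "question? Reply with KEEP or DROP and one sentence of reasoning."
)

ANSWER_PROMPT = (
    "Answer the question using only the retained knowledge below.\n"
    "Knowledge:\n{knowledge}\n\n"
    "Question: {question}\nAnswer:"
)


def rethink_and_answer(question: str, llm: LLM, retriever: Retriever, k: int = 5) -> str:
    """Retrieve k passages, filter them with a re-thinking pass, then answer."""
    passages = retriever(question, k)

    retained: List[str] = []
    for passage in passages:
        # Fine-grained interaction: the query and the passage are presented
        # together so the model can judge their correlation before answering.
        verdict = llm(RETHINK_PROMPT.format(question=question, passage=passage))
        if verdict.strip().upper().startswith("KEEP"):
            retained.append(passage)

    # If everything was dropped, fall back to answering without retrieval.
    knowledge = "\n".join(f"- {p}" for p in retained) if retained else "(none)"
    return llm(ANSWER_PROMPT.format(knowledge=knowledge, question=question))
```

In this reading, the per-passage KEEP/DROP judgment stands in for the selective retention step, and re-presenting the query alongside each passage stands in for the back-and-forth interaction described in the abstract.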
Paper Type: long
Research Area: Question Answering
Contribution Types: Model analysis & interpretability
Languages Studied: English

