Metacognitive Retrieval-Augmented Large Language Models

Published: 23 Jan 2024, Last Modified: 23 May 2024, TheWebConf24 Oral
Keywords: Retrieval-Augmented Generation, Large Language Model, Metacognition
Abstract: Retrieval-augmented language models have become central to natural language processing due to their efficacy in generating precise and relevant content. While traditional methods employ single-time retrieval, more recent approaches have shifted towards multi-time retrieval for complex, multi-hop reasoning tasks. However, current strategies, despite their advancements, are bound by predefined reasoning steps, potentially leading to inaccuracies in response generation. This paper introduces the Metacognitive Retrieval-Augmented Generation framework (MetaRAG), a novel approach that combines the retrieval-augmented generation process with human-inspired metacognition. A concept from cognitive psychology, metacognition allows an entity to self-reflect and critically evaluate its own cognitive processes. By integrating this capability, MetaRAG enables the model to monitor, evaluate, and plan its response strategies, enhancing its introspective reasoning abilities. Through a three-step metacognitive regulation pipeline, the model assesses the adequacy of its answers, identifies reasons for potential inadequacies, and formulates plans for refinement. Empirical evaluations on multi-hop QA datasets show that MetaRAG significantly outperforms existing methods.
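To make the three-step regulation pipeline concrete, below is a minimal Python sketch of the monitor-evaluate-plan loop the abstract describes. Every name in it (retrieve, generate_answer, monitor, evaluate, plan) is a hypothetical placeholder with toy logic, not the authors' actual implementation; the point is only the control flow: generate an answer, check its adequacy, diagnose the failure, plan a refined retrieval, and repeat.

```python
# A minimal sketch of the monitor -> evaluate -> plan metacognitive loop
# described in the abstract. All function bodies below are illustrative
# placeholders, NOT the paper's actual MetaRAG implementation.

from dataclasses import dataclass, field


@dataclass
class ReasoningState:
    question: str
    evidence: list[str] = field(default_factory=list)
    answer: str = ""


def retrieve(query: str) -> list[str]:
    """Placeholder retriever: return passages relevant to the query."""
    return [f"passage about: {query}"]


def generate_answer(state: ReasoningState) -> str:
    """Placeholder generator: produce an answer from question + evidence."""
    return f"answer to {state.question!r} from {len(state.evidence)} passages"


def monitor(state: ReasoningState) -> bool:
    """Monitoring: judge whether the current answer seems adequate.
    Here, a toy signal: enough supporting evidence has been gathered."""
    return len(state.evidence) >= 2


def evaluate(state: ReasoningState) -> str:
    """Evaluating: diagnose why the answer may be inadequate."""
    return "insufficient evidence" if len(state.evidence) < 2 else "ok"


def plan(state: ReasoningState, diagnosis: str) -> str:
    """Planning: formulate a refinement, e.g. a reformulated retrieval query."""
    return f"{state.question} (refined after: {diagnosis})"


def metarag_loop(question: str, max_rounds: int = 3) -> str:
    """Iteratively retrieve and answer until monitoring deems the answer adequate."""
    state = ReasoningState(question=question, evidence=retrieve(question))
    for _ in range(max_rounds):
        state.answer = generate_answer(state)
        if monitor(state):                       # step 1: is the answer adequate?
            break
        diagnosis = evaluate(state)              # step 2: why might it be inadequate?
        new_query = plan(state, diagnosis)       # step 3: plan a refined retrieval
        state.evidence += retrieve(new_query)
    return state.answer


if __name__ == "__main__":
    print(metarag_loop("Who directed the film that won Best Picture in 1998?"))
```

The key design choice this sketch highlights is that the number of retrieval rounds is not fixed in advance: the loop terminates whenever the monitoring step judges the answer adequate, in contrast to the predefined reasoning steps the abstract criticizes.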
Track: Search
Submission Guidelines Scope: Yes
Submission Guidelines Blind: Yes
Submission Guidelines Format: Yes
Submission Guidelines Limit: Yes
Submission Guidelines Authorship: Yes
Student Author: Yes
Submission Number: 1029