Plan-RAG: Planning-guided Retrieval Augmented Generation

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Language Models, Retrieval Augmented Generation, LLM, RAG
TL;DR: Plan-RAG uses a DAG-based reasoning plan to decompose queries, improving attribution and efficiency while leveraging frozen LMs as plug-and-play experts.
Abstract: We introduce Planning-guided Retrieval Augmented Generation (Plan-RAG), a novel framework that extends the retrieve-then-reason paradigm of existing RAG frameworks to plan-then-retrieve. Plan-RAG formulates a reasoning plan as a directed acyclic graph (DAG), decomposing queries into interrelated atomic sub-queries. Answer generation follows the DAG structure, allowing significant gains in efficiency through parallelized retrieval and generation. While state-of-the-art RAG solutions require extensive data generation and fine-tuning of language models (LMs), Plan-RAG incorporates frozen LMs as plug-and-play experts to generate high-quality answers. Compared to existing RAG solutions, Plan-RAG demonstrates significant improvements in reducing hallucinations and bolstering attribution due to its structured sub-query decomposition. Plan-RAG offers a new perspective on integrating external knowledge in LMs while ensuring attribution by design, contributing towards more reliable and interpretable LM-based systems.
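To make the plan-then-retrieve idea concrete, below is a minimal sketch of DAG-ordered execution with parallel retrieval and generation. It is not the authors' implementation: the plan representation and the `retrieve` and `frozen_lm` helpers are hypothetical stand-ins, and the prompt format is illustrative only.

```python
# Sketch of plan-then-retrieve over a reasoning DAG (assumptions labeled above).
from concurrent.futures import ThreadPoolExecutor

def answer_with_plan(query, plan_dag, retrieve, frozen_lm):
    """plan_dag: dict mapping node id -> (sub_query_text, list_of_parent_ids).
    retrieve(text) -> retrieved context string (hypothetical retriever).
    frozen_lm(prompt) -> generated string (frozen, plug-and-play expert).
    Executes sub-queries in topological order; independent nodes run in parallel.
    """
    answers = {}
    pending = dict(plan_dag)
    with ThreadPoolExecutor() as pool:
        while pending:
            # Nodes whose parents are all answered are independent of each
            # other and can be retrieved/generated in parallel.
            ready = [nid for nid, (_, parents) in pending.items()
                     if all(p in answers for p in parents)]
            if not ready:
                raise ValueError("plan contains a cycle; not a DAG")

            def solve(nid):
                sub_q, parents = plan_dag[nid]
                context = retrieve(sub_q)  # per-sub-query retrieval
                prior = "\n".join(answers[p] for p in parents)
                return nid, frozen_lm(f"{prior}\n{context}\nQ: {sub_q}\nA:")

            for nid, ans in pool.map(solve, ready):
                answers[nid] = ans
                del pending[nid]
    # Final answer conditions on all sub-answers; attribution falls out of
    # the structure, since each sub-answer is tied to its own evidence.
    return frozen_lm(f"{query}\nSub-answers:\n" + "\n".join(answers.values()))
```

Because each node carries its own retrieved context, citations can be reported per sub-query rather than for the answer as a whole, which is one plausible reading of the paper's "attribution by design" claim.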
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10537