Keywords: Language Models, Retrieval Augmented Generation, LLM, RAG, Reasoning
TL;DR: Plan$^\ast$RAG generates test-time reasoning plans as DAGs, enabling parallel retrieval and systematic verification for improved accuracy in RAG systems
Abstract: We introduce Plan$^\ast$RAG, a novel framework that enables structured multi-hop reasoning in retrieval-augmented generation (RAG) through test-time reasoning plan generation. While existing approaches such as ReAct maintain reasoning chains within the language model's context window, we observe that this often leads to plan fragmentation and execution failures. Our key insight is that by isolating the reasoning plan as a directed acyclic graph (DAG) outside the LM's working memory, we can enable *(1)* systematic *exploration* of reasoning paths, *(2)* *atomic* subqueries enabling precise retrievals and grounding, and *(3)* *efficiency* through parallel execution and bounded context window utilization. Moreover, Plan$^\ast$RAG's modular design allows it to be integrated with existing RAG methods, providing a practical path to improving current RAG systems. On standard multi-hop reasoning benchmarks, Plan$^\ast$RAG consistently achieves improvements over recently proposed methods such as RQ-RAG and Self-RAG, while maintaining comparable computational costs.
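The DAG-structured plan described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy plan, the `answer_subquery` stand-in, and the scheduling loop are all assumptions; the idea shown is only that subqueries whose dependencies are resolved can be retrieved and answered in parallel, with each subquery seeing a bounded context (its dependencies' answers) rather than the full reasoning chain.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical reasoning plan: each node is an atomic subquery,
# "deps" are edges of the DAG. Placeholders like {q1} are filled
# with the answers of dependency nodes.
plan = {
    "q1": {"text": "Who directed Inception?", "deps": []},
    "q2": {"text": "When was Inception released?", "deps": []},
    "q3": {"text": "What else did {q1} direct before {q2}?", "deps": ["q1", "q2"]},
}

def answer_subquery(text, context):
    # Stand-in for retrieval + LM answering of one atomic subquery.
    # A real system would retrieve documents for `filled` and query the LM.
    filled = text.format(**context)
    return f"answer({filled})"

def execute_plan(plan):
    answers = {}
    remaining = dict(plan)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # Nodes whose dependencies are all answered are ready;
            # independent subqueries (q1, q2) run in parallel.
            ready = [k for k, node in remaining.items()
                     if all(d in answers for d in node["deps"])]
            futures = {k: pool.submit(answer_subquery,
                                      remaining[k]["text"], answers)
                       for k in ready}
            for k, fut in futures.items():
                answers[k] = fut.result()
                del remaining[k]
    return answers

results = execute_plan(plan)
```

Because the plan lives outside the LM's context as plain data, each node can be verified, retried, or re-retrieved independently, which is the property the abstract attributes to isolating the plan from working memory.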
Submission Number: 78