Keywords: Large Language Models, Retrieval-Augmented Reasoning, Reinforcement Learning
Abstract: Recent search-augmented LLMs trained with reinforcement learning (RL) can interleave searching and reasoning for multi-hop reasoning tasks. However, they face two critical failure modes as the accumulating context becomes flooded with both crucial evidence and irrelevant information: (1) deficient search chains that contain incorrect queries or omit retrieval of critical information, and (2) vulnerability to retrieval noise that causes models to misidentify distractors as valid evidence.
To address these challenges, we propose **D²Plan**, a **D**ual-agent **D**ynamic global **Plan**ning paradigm for complex retrieval-augmented reasoning. D²Plan operates through the collaboration of a *Reasoner* and a *Purifier*: the *Reasoner* constructs explicit global plans during reasoning and dynamically adapts them based on retrieval feedback; the *Purifier* assesses retrieval relevance and condenses key information for the *Reasoner*.
We further introduce a two-stage training framework that teaches LLMs to master the D²Plan paradigm: a supervised fine-tuning (SFT) cold-start on synthesized trajectories, followed by RL with plan-oriented rewards.
Extensive experiments demonstrate that D²Plan enables more coherent multi-step reasoning and stronger resilience to irrelevant information, thereby achieving superior performance on challenging QA benchmarks.
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: reasoning, open-domain QA
Languages Studied: English
Submission Number: 5364