Keywords: Retrieval-Augmented Generation, Agentic RAG, Reinforcement Learning, Multi-Hop Question Answering
TL;DR: We propose Search-P1, a framework using path-centric process rewards to train agentic RAG systems, achieving an average accuracy gain of 7.7 points over existing methods by evaluating reasoning trajectory quality rather than just final answers.
Abstract: Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by incorporating external knowledge, yet traditional single-round retrieval struggles with complex multi-step reasoning.
Agentic RAG addresses this by enabling LLMs to dynamically decide when and what to retrieve, but current RL-based training methods suffer from sparse outcome rewards that discard intermediate signals, and from low sample efficiency, since failed samples contribute no learning signal.
We propose Search-P1, a framework that introduces path-centric reward shaping for agentic RAG training, comprising two key components: (1) Path-Centric Reward, which evaluates the structural quality of reasoning trajectories through order-agnostic step coverage and soft scoring that extracts learning signals even from failed samples, and (2) Dual-Track Path Scoring with offline-generated reference planners that assesses paths from both self-consistency and reference-alignment perspectives.
Experiments on multiple QA benchmarks demonstrate that Search-P1 achieves significant improvements over Search-R1 and other strong baselines, with an average accuracy gain of 7.7 points.
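To illustrate the idea behind the path-centric reward, here is a minimal sketch of an order-agnostic, soft step-coverage score. The step representation (plain strings) and the matching heuristic (token-overlap F1) are assumptions for illustration; the paper's actual scoring method may differ.

```python
def token_f1(pred: str, ref: str) -> float:
    """Token-level F1 between a predicted and a reference reasoning step."""
    p, r = pred.lower().split(), ref.lower().split()
    common = len(set(p) & set(r))
    if common == 0:
        return 0.0
    prec, rec = common / len(p), common / len(r)
    return 2 * prec * rec / (prec + rec)

def path_coverage_reward(pred_steps, ref_steps):
    """Order-agnostic soft coverage: each reference step is credited by its
    best-matching predicted step, so partially correct (even failed)
    trajectories still receive a graded learning signal."""
    if not ref_steps:
        return 0.0
    return sum(max((token_f1(p, r) for p in pred_steps), default=0.0)
               for r in ref_steps) / len(ref_steps)
```

Because the reward takes a maximum over predicted steps for each reference step, shuffling the trajectory's step order leaves the score unchanged, matching the order-agnostic property described above.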
Submission Type: Deployed
Copyright Form: pdf
Submission Number: 120