Keywords: Large Language Models, Retrieval-Augmented Generation, Reinforcement Learning, Reasoning
TL;DR: We propose REX-RAG, a retrieval-augmented RL framework for LLM reasoning, featuring a Mixed Sampling Strategy to escape dead ends and a Policy Correction Mechanism to correct distribution shift and reduce gradient bias.
Abstract: Reinforcement learning (RL) is emerging as a powerful paradigm for enabling large language models (LLMs) to perform complex reasoning tasks. Recent advances indicate that integrating RL with retrieval-augmented generation (RAG) allows LLMs to dynamically incorporate external knowledge, leading to more informed and robust decision-making. However, we identify a critical challenge during policy-driven trajectory sampling: LLMs are frequently trapped in unproductive reasoning paths, which we refer to as "dead ends", committing to overconfident yet incorrect conclusions. This severely hampers exploration and undermines effective policy optimization. To address this challenge, we propose **REX-RAG** (**R**easoning **EX**ploration with Policy Realignment in **R**etrieval-**A**ugmented **G**eneration), a novel framework that explores alternative reasoning paths while maintaining rigorous policy learning through principled distributional corrections. Our approach introduces two symbiotic innovations: **(1) Mixed Sampling Strategy**, which combines a novel probe sampling method with exploratory prompts to escape dead ends; and **(2) Policy Correction Mechanism**, which corrects the distributional shift introduced by exploration. REX-RAG demonstrates that effective exploration is viable only when paired with such rigorous correction. We evaluate REX-RAG on seven question-answering benchmarks; the results show average performance gains of **5.1%** on Qwen2.5-3B and **3.6%** on Qwen2.5-7B over strong baselines, with competitive results across all datasets. An anonymous repository is available at https://anonymous.4open.science/r/REX-RAG.
Supplementary Material: pdf
Primary Area: foundation or frontier models, including LLMs
Submission Number: 6827