UR$^2$: Unify RAG and Reasoning through Reinforcement Learning

ICLR 2026 Conference Submission 182 Authors

01 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: LLM, RAG, Reinforcement Learning
TL;DR: We propose a unified framework that integrates retrieval and reasoning via Reinforcement Learning, achieving state-of-the-art performance across QA, medical, and mathematical reasoning benchmarks.
Abstract: Large Language Models (LLMs) have shown remarkable capabilities through two complementary paradigms: Retrieval-Augmented Generation (RAG), which enhances knowledge grounding, and Reinforcement Learning from Verifiable Rewards (RLVR), which optimizes complex reasoning abilities. However, these two capabilities are often developed in isolation, and existing efforts to unify them remain narrow in scope---typically limited to open-domain QA with fixed retrieval settings and task-specific constraints. This lack of integration constrains generalization and limits the applicability of RAG-RL methods to broader domains. To bridge this gap, we propose **UR$^2$** (**U**nified **R**AG and **R**easoning), a general framework that unifies retrieval and reasoning through Reinforcement Learning. UR$^2$ introduces two key contributions: difficulty-aware curriculum training that selectively invokes retrieval only for challenging problems, and a hybrid knowledge access strategy that combines domain-specific offline corpora with LLM-generated summaries. These components are designed to enable dynamic coordination between retrieval and reasoning, improving adaptability across a diverse range of tasks. Experiments across open-domain QA, MMLU-Pro, medical, and mathematical reasoning tasks demonstrate that UR$^2$ (built on Qwen-2.5-3/7B and LLaMA-3.1-8B) significantly outperforms existing RAG and RL methods, achieving performance comparable to GPT-4o-mini and GPT-4.1-mini on several benchmarks. We will release all code, models, and data upon submission.
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 182
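To make the abstract's "selectively invokes retrieval only for challenging problems" idea concrete, here is a minimal, purely illustrative Python sketch; it is not the authors' implementation. It assumes hypothetical stand-ins (`generate`, `retrieve`, `is_correct`) for the policy LLM, the hybrid knowledge access (offline corpus plus LLM-generated summaries), and the verifiable reward check, and gates retrieval on an estimated failure rate of retrieval-free attempts.

```python
"""Hypothetical sketch of difficulty-gated retrieval (not the paper's code)."""
import random

def generate(question, context=None):
    # Stand-in for sampling an answer from the policy LLM.
    return f"answer({question} | uses_context={context is not None})"

def retrieve(question, k=3):
    # Stand-in for hybrid knowledge access: offline corpus + LLM-generated summaries.
    return [f"passage_{i} relevant to: {question}" for i in range(k)]

def is_correct(answer, gold):
    # Stand-in for the verifiable reward (e.g., an exact-match or rule-based check).
    return random.random() < 0.5

def estimated_difficulty(question, gold, n_samples=8):
    """Estimate difficulty as the failure rate of retrieval-free attempts."""
    failures = sum(not is_correct(generate(question), gold) for _ in range(n_samples))
    return failures / n_samples

def answer_with_gated_retrieval(question, gold, threshold=0.75):
    """Invoke retrieval only when the problem looks hard for the current policy."""
    if estimated_difficulty(question, gold) >= threshold:
        context = retrieve(question)   # hard case: ground the answer in retrieved text
        return generate(question, context)
    return generate(question)          # easy case: rely on parametric knowledge

if __name__ == "__main__":
    print(answer_with_gated_retrieval("example question", gold="example answer"))
```

In an RL curriculum, a gate like this would let easy problems train pure reasoning while hard problems exercise the retrieval-augmented path; the threshold, difficulty estimator, and retrieval interface shown here are assumptions for illustration only.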