Q-RAG: Long Context Multi‑Step Retrieval via Value‑Based Embedder Training

Published: 26 Jan 2026, Last Modified: 08 Mar 2026 · ICLR 2026 Oral · CC BY 4.0
Keywords: Reinforcement Learning, RL, QA, Long-context, RAG, NLP
Abstract: Retrieval-Augmented Generation (RAG) methods enhance LLM performance by filtering the context down to relevant passages, reducing both hallucinations and inference cost. However, most existing RAG methods perform single-step retrieval, which is often insufficient for complex questions that require multi-step search. Recently, multi-step retrieval approaches have emerged, typically based on fine-tuning small LLMs to perform the multi-step retrieval; such fine-tuning is resource-intensive and precludes the use of larger LLMs. In this work, we propose Q-RAG, a novel approach that fine-tunes the Embedder model for multi-step retrieval using reinforcement learning (RL). Q-RAG offers a competitive, resource-efficient alternative to existing multi-step retrieval methods for open-domain question answering and achieves state-of-the-art results on the popular long-context benchmarks BabiLong and RULER for contexts up to 10M tokens. Code is available at: https://github.com/griver/Q-RAG.
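To make the multi-step retrieval setting concrete, here is a minimal, purely illustrative sketch of an embedder-driven retrieval loop: at each step the current query embedding is scored against the corpus, the best passage is added to the context, and the query is updated with it so the next hop can reach facts not mentioned in the original question. The bag-of-words "embedder" and all function names below are assumptions for illustration only; Q-RAG instead trains a neural embedder with value-based RL, as described in the paper.

```python
# Illustrative multi-step retrieval loop (NOT the Q-RAG algorithm itself).
# The toy embedder is a lowercase bag-of-words vector; a real system would
# use a trained neural embedder.
from collections import Counter
import math

def embed(text):
    # Hypothetical stand-in embedder: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def multi_step_retrieve(question, corpus, steps=2):
    query, retrieved = question, []
    for _ in range(steps):
        scores = [(cosine(embed(query), embed(p)), p)
                  for p in corpus if p not in retrieved]
        if not scores:
            break
        _, best = max(scores)
        retrieved.append(best)
        # Fold the retrieved passage into the next-step query so the
        # second hop can bridge to facts absent from the question.
        query = question + " " + best
    return retrieved
```

On a two-hop toy example ("which country does alice live in" over passages about Alice and Paris), the first hop retrieves the passage linking Alice to Paris and the second hop, using the updated query, retrieves the passage linking Paris to France.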
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 25302