s3: You Don't Need That Much Data to Train a Search Agent via RL

ACL ARR 2025 May Submission1375 Authors

17 May 2025 (modified: 03 Jul 2025) · License: CC BY 4.0
Abstract: Retrieval-augmented generation (RAG) systems empower large language models (LLMs) to access external knowledge during inference. Recent advances have enabled LLMs to act as search agents via reinforcement learning (RL), improving information acquisition through multi-turn interactions with retrieval engines. However, existing approaches either optimize retrieval using search-only metrics (e.g., NDCG) that ignore downstream utility, or fine-tune the entire LLM to jointly reason and retrieve—entangling retrieval with generation, which limits real search utility and compatibility with frozen or proprietary models. In this work, we propose \texttt{s3}, a lightweight, model-agnostic framework that decouples the searcher from the generator and trains the searcher using a Gain Beyond RAG reward: the improvement in generation accuracy over naïve RAG. \texttt{s3} requires only 2.4k training samples to outperform baselines trained on over 70$\times$ more data, consistently delivering stronger downstream performance across six general QA and five medical QA benchmarks.
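The Gain Beyond RAG reward stated in the abstract can be sketched as a simple difference of generator accuracies. The sketch below is illustrative only, not the paper's implementation: the names `gbr_reward`, `toy_generate`, and `exact_match` are assumptions, and in practice the generator is a frozen LLM and the score a QA accuracy metric.

```python
def gbr_reward(question, gold_answer, searcher_docs, naive_docs, generate, score):
    """Gain Beyond RAG (GBR): accuracy of the frozen generator when reading
    the trained searcher's documents, minus its accuracy when reading
    naive top-k RAG documents for the same question."""
    acc_searcher = score(generate(question, searcher_docs), gold_answer)
    acc_naive = score(generate(question, naive_docs), gold_answer)
    return acc_searcher - acc_naive

# Toy stand-ins for illustration: a "generator" that copies the first
# retrieved document, and exact-match accuracy as the score.
toy_generate = lambda q, docs: docs[0] if docs else ""
exact_match = lambda pred, gold: 1.0 if pred == gold else 0.0

# Searcher found the right document, naive RAG did not: reward is +1.0.
reward = gbr_reward("capital?", "Paris", ["Paris"], ["Lyon"],
                    toy_generate, exact_match)
```

Because only the searcher is trained against this reward, the generator can remain frozen or proprietary, which is the decoupling the abstract describes.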
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: LLM/AI agents, retrieval-augmented generation, applications
Contribution Types: NLP engineering experiment, Reproduction study, Approaches to low-resource settings, Publicly available software and/or pre-trained models
Languages Studied: English
Keywords: LLM/AI agents, retrieval-augmented generation, applications
Submission Number: 1375