Reinforcing Agentic Search Via Reward Density Optimization

Authors: ICLR 2026 Conference Submission 278 Authors (anonymous)

01 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: tool-integrated reasoning, large language models
Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) is a promising approach for enhancing agentic deep search. However, its application is often hindered by low **Reward Density** in deep search scenarios, where agents expend significant exploratory costs for infrequent and often null final rewards. In this paper, we formalize this challenge as the **Reward Density Optimization** problem, which aims to improve the reward obtained per unit of exploration cost. We introduce **InfoFlow**, a systematic framework that tackles this problem from three aspects. 1) **Subproblem decomposition**: breaking down long-range tasks to assign process rewards, thereby providing denser learning signals. 2) **Failure-guided hints**: injecting corrective guidance into stalled trajectories to increase the probability of successful outcomes. 3) **Dual-agent refinement**: employing a dual-agent architecture to offload the cognitive burden of deep exploration. A refiner agent synthesizes the search history, which effectively compresses the researcher's perceived trajectory, thereby reducing exploration cost and increasing the overall reward density. We evaluate InfoFlow on multiple agentic search benchmarks, where it significantly outperforms strong baselines, enabling lightweight LLMs to achieve performance comparable to advanced proprietary LLMs.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 278