Planner-R1: Reward Shaping Enables Efficient Agentic RL with Smaller LLMs

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Agentic RL, LLM Planning, Curriculum Learning, Multi-step tool use, SLM as Agent
TL;DR: Planner-R1 reaches 56.9% on TravelPlanner with only 180 training queries (2.7× GPT-5), sets the strongest open-weight agentic result, and shows that reward shaping makes 8B models 3.5× more compute-efficient while still generalizing beyond training.
Abstract: We investigated agentic RL with large language models on the TravelPlanner benchmark. Our approach, Planner-R1, achieved a 56.9% final-pass rate with only 180 training queries, a 2.7× improvement over GPT-5’s 21.2% baseline and the strongest agentic result on the public leaderboard. A central finding was that smaller models (8B) were highly responsive to reward shaping: with dense process-level signals, they reached competitive performance while being 3.5× more compute-efficient and 1.5× more memory-efficient than 32B models. Larger models were more robust under sparse rewards but exhibited smaller relative gains from shaping and higher variance across runs. While curriculum learning offered no significant benefit, shaped rewards consistently amplified learning dynamics, making 8B models the most efficient setting for agentic RL. Crucially, these gains did not come at the cost of overfitting: fine-tuned models largely maintained or exceeded baseline performance on out-of-domain tasks, including Multi-IF, NaturalPlan, and Tau-Bench. These results establish reward shaping as a decisive lever for scaling agentic RL, highlight the competitive strength of smaller models, and demonstrate that efficiency can be achieved without sacrificing generalization.
Primary Area: reinforcement learning
Submission Number: 20247