Reinforcing Multi-Turn Reasoning in LLM Agents via Turn-Level Reward Design and Credit Assignment

Published: 04 Nov 2025, Last Modified: 04 Nov 2025, MTI-LLM @ NeurIPS 2025 Poster, CC BY-ND 4.0
Keywords: Reinforcement Learning, LLM Agent, Multi-Turn Interaction
Abstract: This paper investigates approaches to enhance the reasoning capabilities of Large Language Model (LLM) agents using Reinforcement Learning (RL). Specifically, we focus on long-horizon, multi-turn agent scenarios, which can be naturally modeled as Markov Decision Processes. Although popular RL algorithms such as Group Relative Policy Optimization (GRPO) and Proximal Policy Optimization (PPO) have been widely applied to train multi-turn LLM agents, they typically rely only on sparse final rewards and lack dense intermediate signals across multiple decision steps, which limits their performance on complex reasoning tasks. To address this, we introduce a \textit{fine-grained turn-level credit assignment} strategy that enables more effective process-level supervision in multi-turn agent interactions. By incorporating well-designed \textit{turn-level rewards}, we extend GRPO and PPO to multi-turn variants that better guide LLM agents at each turn of interaction. Our case studies on multi-turn reasoning-augmented search tasks demonstrate that RL algorithms augmented with fine-grained credit assignment significantly improve the performance of LLM agents compared with baselines. When evaluated on diverse question-answering datasets with 7B models, our method exhibits \textit{greater stability}, \textit{faster convergence}, and \textit{higher accuracy} across multiple runs, as reflected in the training and validation reward curves.
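To make the idea of turn-level credit assignment concrete, the following is a minimal sketch under assumed notation; the abstract does not give the paper's exact reward design, so the symbols $r_{i,t}$, $R_{i,t}$, $\hat{A}_{i,t}$, $\gamma$, $G$, and $T$ are introduced here purely for illustration. Let $r_{i,t}$ be the turn-level reward of rollout $i$ at turn $t$, with the final outcome reward folded into the last turn $T$, and normalize returns within a group of $G$ rollouts at each turn index, GRPO-style:
\[
R_{i,t} \;=\; \sum_{k=t}^{T} \gamma^{\,k-t}\, r_{i,k},
\qquad
\hat{A}_{i,t} \;=\; \frac{R_{i,t} - \operatorname{mean}\bigl(\{R_{j,t}\}_{j=1}^{G}\bigr)}{\operatorname{std}\bigl(\{R_{j,t}\}_{j=1}^{G}\bigr)}.
\]
Each token generated in turn $t$ of rollout $i$ would then use $\hat{A}_{i,t}$ as its advantage in the clipped PPO/GRPO objective, so credit is assigned per turn rather than only from the sparse final reward.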
Submission Number: 161