Keywords: large language model, agent, reinforcement learning, process reward model
TL;DR: A new "Reward Rising Optimization" method trains AI agents more efficiently by only collecting data when rewards increase between steps.
Abstract: Large language models (LLMs) have exhibited extraordinary performance on a variety of tasks, yet it remains challenging for them to solve complex multi-step tasks as agents. In practice, agents are sensitive to the outcomes of certain key steps, which makes them likely to fail the task because of a subtle mistake in the planning trajectory. Recent approaches resort to calibrating the reasoning process through reinforcement learning, rewarding or penalizing every reasoning step with process supervision via Process Reward Models (PRMs). However, PRMs are difficult and costly to scale up with a large number of next-action candidates, since they require extensive computation to acquire training data through per-step trajectory exploration. To mitigate this issue, we focus on the relative reward trend across successive reasoning steps and propose maintaining an increasing reward in the collected trajectories for process supervision, which we term Reward Rising Optimization (RRO). Specifically, we incrementally expand the candidate pool at each step until we identify a step exhibiting a positive reward differential, i.e., a rising reward, relative to its preceding step. This method dynamically expands the search space for next-action candidates, efficiently capturing high-quality data. We provide mathematical grounding and empirical results on the WebShop and InterCode-SQL benchmarks, showing that our proposed RRO method achieves superior performance while requiring much less exploration cost.
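The abstract's rising-reward collection loop can be illustrated with a minimal sketch. This is not the authors' implementation; `sample_action` and `prm_score` are hypothetical stand-ins for a policy rollout call and a PRM scorer, and the stopping thresholds are illustrative assumptions.

```python
# Minimal sketch of reward-rising data collection, assuming a generic policy
# interface (`sample_action`) and process reward model (`prm_score`); both
# names are hypothetical stand-ins, not the paper's actual API.
from typing import Callable, List, Tuple


def collect_rising_trajectory(
    initial_state: str,
    sample_action: Callable[[str], Tuple[str, str]],  # state -> (action, next_state)
    prm_score: Callable[[str, str], float],           # (state, action) -> process reward
    max_steps: int = 10,
    max_candidates: int = 8,
) -> List[Tuple[str, str, float]]:
    """Collect a trajectory, accepting at each step only an action whose PRM
    reward rises relative to the previously accepted step."""
    trajectory: List[Tuple[str, str, float]] = []
    state, prev_reward = initial_state, float("-inf")

    for _ in range(max_steps):
        accepted = None
        # Expand the candidate pool until a positive reward differential appears.
        for _ in range(max_candidates):
            action, next_state = sample_action(state)
            reward = prm_score(state, action)
            if reward > prev_reward:  # rising reward: accept and stop searching
                accepted = (action, next_state, reward)
                break
        if accepted is None:
            break  # no rising candidate found within the budget; stop collection
        action, next_state, reward = accepted
        trajectory.append((state, action, reward))
        state, prev_reward = next_state, reward
    return trajectory
```

The key design point reflected here is that candidate exploration at each step stops as soon as a rising reward is found, rather than exhaustively scoring a fixed number of candidates, which is the source of the claimed reduction in exploration cost.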
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 562