Keywords: agent, large language model, q-learning, self-training
Abstract: Language agents have become a promising solution to complex interactive tasks. One of the key ingredients in the success of language agents is the reward model over the trajectories of the agentic workflow, which provides valuable guidance during training or inference. However, due to the lack of annotations for intermediate interactions, most existing works use an outcome reward model to optimize policies across entire trajectories. This can lead to sub-optimal policies and hinder overall performance. To address this, we propose Q\*Agent, which leverages estimated Q values to generate intermediate annotations for open language agents.
By constructing a reasoning tree and performing process reward modeling, Q\*Agent provides effective intermediate guidance at each step, automatically annotating data in a step-wise manner.
In addition, we propose a Q-guided generation strategy that significantly boosts model performance by providing process guidance during inference.
Notably, even when trained on roughly half of the annotated data, Q\*Agent retains strong performance, demonstrating its efficiency under limited supervision. Through qualitative analysis, we further demonstrate that Q\*Agent leads to more accurate decision making.
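As an illustration of the Q-guided generation strategy described in the abstract, the following is a minimal sketch, assuming a hypothetical `sample_actions` function that proposes candidate next steps from the agent's policy and a hypothetical `q_value` scorer standing in for the learned process reward (Q) model; neither name comes from the submission. At each step, candidate actions are sampled and the one with the highest estimated Q value is kept.

```python
from typing import Callable, List

def q_guided_generate(
    task: str,
    sample_actions: Callable[[List[str]], List[str]],  # hypothetical policy proposer
    q_value: Callable[[List[str], str], float],        # hypothetical Q/PRM scorer
    max_steps: int = 10,
) -> List[str]:
    """Greedy Q-guided decoding: at each step, sample candidate actions
    from the policy and keep the one the Q model scores highest."""
    trajectory: List[str] = [task]
    for _ in range(max_steps):
        candidates = sample_actions(trajectory)
        if not candidates:
            break
        best = max(candidates, key=lambda a: q_value(trajectory, a))
        trajectory.append(best)
        if best == "<finish>":  # hypothetical terminal marker
            break
    return trajectory

if __name__ == "__main__":
    # Toy stubs so the sketch runs end to end; in practice the policy
    # would be an LLM agent and q_value a trained process reward model.
    def toy_sampler(traj: List[str]) -> List[str]:
        return ["search[red mug]", "click[buy]", "<finish>"]

    def toy_q(traj: List[str], action: str) -> float:
        return {"search[red mug]": 0.2, "click[buy]": 0.5, "<finish>": 0.9}[action]

    print(q_guided_generate("buy a red mug", toy_sampler, toy_q))
```

This shows only the core control flow of scoring intermediate steps with a Q model at inference time; the paper's actual strategy may use a different search procedure (e.g., beam or tree search over steps) rather than this greedy argmax.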
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 14016