Keywords: Multi-Agent Collaboration, LLM Agents, Preference Optimization
TL;DR: We train a 2-agent LLM team (one actor-agent and one critic-agent) to collaboratively solve problems.
Abstract: Large language models (LLMs) have demonstrated a remarkable ability to serve as general-purpose tools for various language-based tasks.
Recent works have demonstrated that the efficacy of such models can be improved through iterative dialog between multiple models.
While these paradigms show promise in improving model efficacy, most works in this area treat collaboration as an emergent behavior rather than a learned behavior.
In doing so, current multi-agent frameworks rely on collaborative behavior having already been sufficiently trained into off-the-shelf models.
To address this limitation, we propose ACC-Collab, an **A**ctor-**C**riti**c** based learning framework to produce a two-agent team (an actor-agent and a critic-agent) specialized in collaboration.
We demonstrate that ACC-Collab outperforms SotA multi-agent techniques on a wide array of benchmarks.
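The abstract describes an iterative dialog between an actor-agent and a critic-agent. A minimal sketch of such a collaboration loop is shown below; the `actor_llm` and `critic_llm` functions are hypothetical stubs standing in for real LLM calls, and the loop structure is a generic illustration, not the paper's actual training or inference procedure.

```python
# Hedged sketch of a two-agent actor-critic dialog loop. All names here
# (actor_llm, critic_llm, collaborate) are illustrative assumptions, not
# APIs from the paper; real implementations would call an LLM backend.

def actor_llm(question, feedback):
    """Stub actor: would prompt an LLM to draft or revise an answer."""
    if feedback is None:
        return f"draft answer to {question!r}"
    return f"answer to {question!r} (revised using: {feedback})"

def critic_llm(question, answer):
    """Stub critic: would prompt an LLM to critique the actor's answer."""
    return f"critique of {answer!r}"

def collaborate(question, rounds=3):
    """Iterate actor proposal and critic feedback for a fixed number of rounds."""
    feedback = None
    answer = None
    for _ in range(rounds):
        answer = actor_llm(question, feedback)   # actor proposes / revises
        feedback = critic_llm(question, answer)  # critic responds
    return answer
```

In a trained setting, the preference-optimization step would presumably update both agents based on whether the final answer improves across rounds; the sketch above only illustrates the dialog structure itself.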
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12819