Evaluate, Scale, and Credit: A Comprehensive Study on Multi-Agent Collaboration of Large Language Models

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Multi-Agent Systems based on Large Language Models (LLM-MAS) perform well in many domains, but we still lack a clear understanding of the collaboration mechanisms among multiple LLM-based agents. This study explores three key questions: (1) Can multi-agent systems outperform single-agent systems? (2) Does scaling benefit multi-agent systems? (3) How can we credit individual agents and optimize collaboration? Specifically, we design five collaboration architectures and evaluate their effectiveness across different LLMs and tasks. Our findings offer significant insights for understanding collaboration within MAS, optimizing collaboration architectures among agents, and reducing system costs. Furthermore, our conclusions will inspire and provide new perspectives for future studies on LLM-MAS.
Paper Type: long
Research Area: NLP Applications
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English