Keywords: LLM Agent, LLM-based Multi-Agent System
Abstract: LLM-based multi-agent systems (MAS) have emerged as a promising approach to tackle complex tasks that are challenging for a single LLM. A natural way to improve performance is to increase the number of agents. However, prior empirical findings suggest that in homogeneous settings, such scaling yields diminishing returns, whereas introducing heterogeneity (e.g., different models, prompts, or tools) can lead to more substantial gains. This motivates a fundamental question: what limits multi-agent scaling, and why does diversity help? In this paper, we develop an information-theoretic framework suggesting that MAS performance is fundamentally constrained by the intrinsic task uncertainty, rather than increasing monotonically with the number of agents. We derive architecture-agnostic bounds showing that performance improvements depend on the number of effective channels through which the system acquires non-redundant information. Under this view, homogeneous agents saturate early because their outputs are strongly correlated, while heterogeneous agents can provide more complementary evidence. To make this perspective operational, we provide a label-free proxy $\widehat{K}$ that estimates the number of effective channels from semantic similarity patterns in agent outputs.
Empirically, we find that heterogeneous configurations consistently outperform homogeneous scaling; in our experiments, two diverse agents can match or exceed the performance of 16 homogeneous agents. Overall, our results offer a principled perspective on the limits of multi-agent scaling and suggest diversity-aware design as a promising direction for building more efficient MAS. Code and dataset are available at https://github.com/SafeRL-Lab/Agent-Scaling.
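The abstract does not spell out how $\widehat{K}$ is computed; one plausible label-free sketch, assuming agent outputs are embedded as vectors, is to take the eigenvalue participation ratio of their cosine-similarity (Gram) matrix, which equals 1 when all agents are fully redundant and $n$ when their outputs are mutually orthogonal. The function name and embedding setup below are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def effective_channels(embeddings: np.ndarray) -> float:
    """Illustrative proxy for the number of effective channels:
    the eigenvalue participation ratio of the cosine-similarity
    matrix of agent-output embeddings (shape: n_agents x dim).
    This is an assumed sketch, not necessarily the paper's K-hat."""
    # Normalize rows so the Gram matrix holds cosine similarities.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    S = X @ X.T
    eig = np.linalg.eigvalsh(S)
    eig = np.clip(eig, 0.0, None)  # guard tiny negative numerical noise
    # Participation ratio: ~1 for redundant agents, ~n for orthogonal ones.
    return float(eig.sum() ** 2 / (eig ** 2).sum())

rng = np.random.default_rng(0)
redundant = np.tile(rng.normal(size=(1, 64)), (8, 1))  # 8 identical agents
diverse = np.eye(8, 64)                                # 8 orthogonal agents
print(effective_channels(redundant))  # close to 1.0
print(effective_channels(diverse))    # close to 8.0
```

Under this sketch, a homogeneous ensemble whose outputs cluster tightly yields a small $\widehat{K}$ regardless of agent count, matching the abstract's claim that homogeneous scaling saturates early.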
Track: Long Paper
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 101