Keywords: multi-agent reasoning, chain-of-thought, communication complexity, resource allocation, large reasoning models, theoretical analysis, parallel reasoning, theory of deep learning, transformer expressivity
TL;DR: We develop a theoretical framework for analyzing the expressivity of multi-agent reasoning systems, derive bounds on communication and agent resources across key algorithmic tasks, and validate these results empirically with large language models.
Abstract: Chain-of-thought prompting has popularized step-by-step reasoning in large language models, yet model performance still degrades as problem complexity and context length grow. By decomposing difficult tasks with long contexts into shorter, manageable ones, recent multi-agent paradigms offer a promising near-term solution to this problem. However, the fundamental capacities of such systems remain poorly understood. In this work, we propose a theoretical framework for analyzing the expressivity of multi-agent systems. We apply our framework to three algorithmic families: state tracking, recall, and multi-hop reasoning. We derive bounds on (i) the number of agents required, (ii) the quantity and structure of inter-agent communication, and (iii) the achievable speedups as problem size and context scale. Our results identify regimes where communication is provably beneficial, delineate tradeoffs between agent count and bandwidth, and expose intrinsic limitations when either resource is constrained. We complement our theoretical analysis with experiments on pretrained LLMs using controlled synthetic benchmarks. Our empirical results confirm the tradeoffs between key quantities predicted by our theory. Collectively, our analysis offers principled guidance for designing scalable multi-agent reasoning systems.
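To make the abstract's setup concrete, the sketch below shows one plausible way to build a controlled synthetic multi-hop recall task whose long key-value context is split into short per-agent shards. This is a hypothetical illustration, not the paper's actual benchmark; all function names and parameters (make_multihop_task, shard_context, num_pairs, hops, num_agents) are assumptions made here for illustration.

```python
# Hypothetical sketch (not the paper's benchmark): a controlled multi-hop
# recall task whose long key-value context is split across several agents,
# each holding a short shard, mirroring the task decomposition described
# in the abstract.
import random
import string


def make_multihop_task(num_pairs=64, hops=3, seed=0):
    """Build a chain of key-value lookups: k0 -> k1 -> ... -> answer."""
    rng = random.Random(seed)
    keys = rng.sample([c + d for c in string.ascii_uppercase
                       for d in string.ascii_uppercase], num_pairs)
    # Chain the first `hops` keys, then fill the rest with distractors.
    pairs = {keys[i]: keys[i + 1] for i in range(hops)}
    for k in keys[hops + 1:]:
        pairs[k] = rng.choice(keys)
    question, answer = keys[0], keys[hops]
    return pairs, question, answer


def shard_context(pairs, num_agents=4):
    """Split the key-value context into short per-agent shards."""
    items = list(pairs.items())
    shard_size = (len(items) + num_agents - 1) // num_agents
    return [dict(items[i:i + shard_size])
            for i in range(0, len(items), shard_size)]


if __name__ == "__main__":
    pairs, question, answer = make_multihop_task()
    shards = shard_context(pairs)
    # Each agent sees only one shard; agents exchange intermediate keys
    # (the inter-agent "communication") until the final hop resolves.
    print(f"question={question}, answer={answer}, shards={len(shards)}")
```

Under this kind of construction, the number of hops and the shard size directly control how much inter-agent communication is needed, which is the tradeoff the theory and experiments examine.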
Submission Number: 128