Keywords: Multi-agent collaboration, sparsification, LLM agents
Abstract: Recent advancements in large language model (LLM)-powered agents have shown that collective intelligence can significantly outperform individual capabilities, largely attributed to meticulously designed inter-agent communication topologies. Despite their impressive performance, existing multi-agent pipelines inherently introduce substantial token overhead and increased economic costs, which pose challenges for their large-scale deployment. In response to this challenge, we propose an economical, simple, and robust multi-agent communication framework, termed $\texttt{AgentPrune}$, which seamlessly integrates into mainstream multi-agent systems and prunes redundant or even malicious communication messages. Technically, $\texttt{AgentPrune}$ is the first to identify and formally define the $\textit{Communication Redundancy}$ issue in current LLM-based multi-agent pipelines, and it efficiently performs one-shot pruning on the spatial-temporal message-passing graph, yielding a token-economic and high-performing communication topology.
Extensive experiments across six benchmarks demonstrate that $\texttt{AgentPrune}$ $\textbf{(I)}$ achieves results comparable to state-of-the-art topologies at a cost of merely $\$5.6$, compared to their $\$43.7$, $\textbf{(II)}$ integrates seamlessly into existing multi-agent frameworks with a $28.1\%\sim72.8\%\downarrow$ reduction in token usage, and $\textbf{(III)}$ successfully defends against two types of agent-based adversarial attacks with a $3.5\%\sim10.8\%\uparrow$ performance boost. The source code is available at \url{https://github.com/yanweiyue/AgentPrune}.
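For intuition, below is a minimal sketch of what one-shot pruning of an inter-agent communication graph could look like. The `one_shot_prune` helper, the edge-importance scores, and the `keep_ratio` parameter are illustrative assumptions for this sketch only; they do not reproduce $\texttt{AgentPrune}$'s actual scoring or pruning procedure, which is defined in the paper.

```python
# Illustrative sketch: one-shot pruning of a communication graph given
# per-edge importance scores. All names and the scoring input here are
# hypothetical, not the paper's method.
import numpy as np

def one_shot_prune(adj: np.ndarray, scores: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Keep only the highest-scoring fraction of existing edges.

    adj:    binary adjacency matrix of the agent communication graph
    scores: per-edge importance estimates (same shape as adj)
    """
    edge_idx = np.flatnonzero(adj)               # flat indices of existing edges
    k = max(1, int(keep_ratio * edge_idx.size))  # number of edges to retain
    top = edge_idx[np.argsort(scores.flat[edge_idx])[-k:]]
    pruned = np.zeros_like(adj)
    pruned.flat[top] = 1                         # rebuild graph with top-k edges only
    return pruned

# Example: prune a fully connected 4-agent graph to half its edges.
rng = np.random.default_rng(0)
adj = np.ones((4, 4)) - np.eye(4)
scores = rng.random((4, 4))
print(one_shot_prune(adj, scores, keep_ratio=0.5))
```

Because the pruning is one-shot rather than iterative, the sparsified topology is computed once and then reused for all subsequent agent communication rounds, which is where the token savings accrue.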
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2208