Keywords: multi-agent learning, agent coordination protocols, agentic web, A2A, LLMs as agents
TL;DR: coordination protocol for networks of interacting agents.
Abstract: Modern AI agents can exchange messages using protocols such as A2A and ACP, yet these mechanisms focus on communication rather than coordination. As agent populations grow, this limitation leads to brittle collective behavior, where individually “smart” agents converge on poor group outcomes. We introduce the \emph{Ripple Effect Protocol (REP)}, a coordination protocol in which agents share not only their decisions but also lightweight \emph{sensitivities}—signals that express how their choices would change if key environment variables shifted. These sensitivities ripple through local networks, enabling groups to align faster and more stably than with decision-only communication. We formalize REP's protocol specification, separating required message schemas from optional aggregation rules, and evaluate it across three domains: supply chain information cascades (Beer Game), preference aggregation in sparse networks (Movie Scheduling), and sustainable resource allocation (Fishbanks). Across these experiments, REP consistently improves coordination accuracy and communication efficiency, while flexibly handling both numerical and textual signals from LLM-based agents. By making coordination a protocol-level capability, REP provides scalable infrastructure for the emerging Internet of Agents. The REP SDK will be released with this paper.
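Illustrative sketch: since the REP SDK has not yet been released, the Python snippet below is only a minimal sketch of what a REP-style message (a decision plus sensitivities) and a simple optional aggregation rule might look like; every class, field, and function name here is an assumption for illustration, not the paper's actual schema.

    from dataclasses import dataclass, field
    from typing import Dict, List, Union

    # Hypothetical REP-style message: an agent shares its current decision
    # together with "sensitivities" describing how that decision would shift
    # if key environment variables changed. Signals may be numeric or textual.
    @dataclass
    class REPMessage:
        agent_id: str
        decision: Union[float, str]
        sensitivities: Dict[str, Union[float, str]] = field(default_factory=dict)

    def aggregate_sensitivities(messages: List[REPMessage]) -> Dict[str, float]:
        """Toy example of an optional aggregation rule: average the numeric
        sensitivities reported by neighbors for each environment variable."""
        totals: Dict[str, float] = {}
        counts: Dict[str, int] = {}
        for m in messages:
            for var, s in m.sensitivities.items():
                if isinstance(s, (int, float)):
                    totals[var] = totals.get(var, 0.0) + float(s)
                    counts[var] = counts.get(var, 0) + 1
        return {var: totals[var] / counts[var] for var in totals}

    # Example usage (hypothetical Beer Game agents):
    msgs = [
        REPMessage("retailer", 12.0, {"demand_forecast": -0.5, "lead_time": 0.2}),
        REPMessage("wholesaler", 15.0, {"demand_forecast": -0.3}),
    ]
    print(aggregate_sensitivities(msgs))  # {'demand_forecast': -0.4, 'lead_time': 0.2}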
Primary Area: infrastructure, software libraries, hardware, systems, etc.
Submission Number: 20904