Symbolic Guidance for LLM Agents in Distributed Multiagent Coordination
Keywords: Large Language Models, Multiagent Systems, Distributed Coordination, Symbolic Guidance, Adjustable Autonomy
TL;DR: We introduce the Symbolic Guidance Taxonomy (SGT), a framework that maps how different forms of symbolic instruction shape LLM agents’ autonomy in distributed coordination.
Abstract: Large language models (LLMs) are increasingly deployed as autonomous agents within multi-agent systems. Yet, their capacity to execute distributed coordination protocols remains poorly understood. Recently, researchers introduced AgentsNet, a framework that enables LLM agents to coordinate in a distributed manner to solve a range of multi-agent coordination problems. However, because this approach grants agents full flexibility in reasoning about their coordination strategies, it often yields inconsistent or poor performance, particularly in more complex domains.
In this paper, we hypothesize that LLM agents can achieve better coordination when provided with symbolic guidance derived from established symbolic algorithms. To test this hypothesis, we systematically evaluate how different forms of guidance influence performance across three canonical distributed graph-based problems -- graph coloring, matching, and vertex cover. For each problem, we examine two variants: a simpler one where feasible solutions can be found using local, agent-based heuristics, and a more complex one where optimal solutions require global coordination.
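For intuition, the sketch below shows the kind of local, agent-based heuristic that suffices for the simpler variants: greedy graph coloring, where each node picks the smallest color not used by its already-decided neighbors. The adjacency-dict representation, processing order, and function name are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions, not the paper's code): a local greedy
# coloring heuristic that yields a feasible, though not necessarily
# minimal, coloring.

def greedy_coloring(adjacency):
    """Return a feasible coloring as a dict: node -> color index."""
    colors = {}
    for node in sorted(adjacency):            # deterministic processing order (assumed)
        taken = {colors[n] for n in adjacency[node] if n in colors}
        color = 0
        while color in taken:                 # smallest color unused by decided neighbors
            color += 1
        colors[node] = color
    return colors

if __name__ == "__main__":
    # Toy 4-cycle: greedy finds a feasible 2-coloring here.
    graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(greedy_coloring(graph))             # e.g. {0: 0, 1: 1, 2: 0, 3: 1}
```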
To structure this investigation, we introduce the Symbolic Guidance Taxonomy (SGT), which defines a spectrum of guidance ranging from the AgentsNet baseline -- using only freeform natural-language task descriptions with no explicit guidance -- at one end, to complete algorithmic specifications at the other, with intermediate levels incorporating partial pseudocode. Our results reveal that intermediate guidance levels are most effective: partial pseudocode guidance consistently outperforms the unguided AgentsNet baseline. Moreover, both the choice of model and the source of symbolic guidance play important roles. Gemini agents generally achieve the best overall performance, Llama agents often benefit from guidance, and Qwen agents tend to be largely insensitive to it. Similarly, guidance derived from local heuristic algorithms proves broadly robust, while guidance from complex global search algorithms tends to be less effective. Collectively, these findings offer design principles for balancing symbolic structure and LLM adaptability in distributed multi-agent coordination.
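To make the guidance spectrum concrete, the following sketch shows one way SGT levels could be realized as prompt augmentation: the baseline sends only the task description, an intermediate level appends partial pseudocode, and the full level appends a complete algorithmic specification. All names and strings here are hypothetical illustrations, not the paper's implementation.

```python
# Illustrative sketch (assumed names and strings): composing per-agent
# prompts at different symbolic-guidance levels.

TASK = "You are one node in a graph. Coordinate with your neighbors to solve the task."

GUIDANCE = {
    "none": "",                                   # AgentsNet-style unguided baseline
    "partial_pseudocode": (
        "Hint (partial pseudocode):\n"
        "  repeat each round:\n"
        "    exchange tentative choices with neighbors\n"
        "    if your choice conflicts, revise it locally\n"
    ),
    "full_algorithm": (
        "Follow this algorithm exactly:\n"
        "  1. ... (complete step-by-step specification) ...\n"
    ),
}

def build_prompt(level: str) -> str:
    """Compose the per-agent prompt for a given guidance level."""
    return TASK if level == "none" else TASK + "\n\n" + GUIDANCE[level]

print(build_prompt("partial_pseudocode"))
```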
Area: Generative and Agentic AI (GAAI)
Generative AI: I acknowledge that I have read and will follow this policy.
Submission Number: 828