Keywords: Multi-agent system, Large language models, Algorithmic decomposition
Abstract: Single-agent LLMs face finite context and role overload, while unstructured multi-agent designs can introduce ambiguous roles and coordination overhead. We therefore introduce Know-The-Ropes (KtR), a practical methodology for projecting algorithmic priors and heuristics into typed, controller-mediated multi-agent blueprints for decomposable tasks. KtR follows a multi-step process---identify bottlenecks, refine the decomposition, apply minimal augmentation (chain-of-thought, self-check, or light fine-tuning), and verify via contracts. In two case studies, Knapsack (3--8 items) and Task Assignment (6--15 jobs), KtR with low-effort LLMs yields notable end-to-end accuracy gains over single-agent zero-shot baselines. With three GPT-4o-mini agents, accuracy on size-5 Knapsack instances rises from 3\% to 95\% after addressing a single bottleneck agent. With six o3-mini agents, Task Assignment reaches 100\% up to size 10 and $\geq$84\% on sizes 13--15, versus $\leq$11\% zero-shot. These results indicate benefits in our controlled setting; KtR complements scaling and prompt/program-of-thought techniques in building reliable multi-agent systems. We do not claim universality: performance depends on task decomposability and interface fidelity. An anonymous code base is available at https://anonymous.4open.science/r/KtR-codebase-5638
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 8601