Know the Ropes: A Heuristic Strategy for LLM-based Multi-Agent System Design

ACL ARR 2025 May Submission4830 Authors

20 May 2025 (modified: 03 Jul 2025) · CC BY 4.0
Abstract: Single‑agent LLMs hit hard limits—finite context, role overload, brittle domain transfer. Conventional multi‑agent fixes soften those edges yet expose fresh pains: ill‑posed decompositions, fuzzy contracts, and verification overhead that blunts the gains. We therefore present Know‑The‑Ropes (KtR), a framework that converts domain priors into an algorithmic blueprint hierarchy: tasks are recursively split into typed, controller‑mediated subtasks, each solved zero‑shot or with the lightest viable boost (chain‑of‑thought, micro‑tune, self‑check). Grounded in the No‑Free‑Lunch theorem, KtR trades the chase for a universal prompt for disciplined decomposition. On a Knapsack benchmark (3–8 items), three GPT‑4o‑mini agents raise accuracy from 3\% zero‑shot to 95\% on size‑5 instances after patching a single bottleneck agent. On the tougher Task‑Assignment suite (6–15 jobs), a six‑agent o3‑mini blueprint hits 100\% up to size 10 and $\geq$84\% on sizes 13–15, versus $\leq$11\% zero‑shot. Algorithm‑aware decomposition plus targeted augmentation thus turns modest models into reliable collaborators—no ever‑larger monoliths required.
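For context on the benchmark task: 0/1 Knapsack instances of the sizes reported here (3–8 items) can be solved exactly with the classic dynamic program, which is how ground-truth answers for such a benchmark are typically checked. The following minimal sketch is illustrative only — the function name and example values are not from the paper:

```python
def knapsack(weights, values, capacity):
    """Exact 0/1 knapsack via the standard 1-D dynamic program.

    dp[c] holds the best total value achievable with capacity c
    using the items considered so far.
    """
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Illustrative size-3 instance: items (weight, value) = (2,3), (3,4), (4,5),
# capacity 5 → picking the first two items gives total value 7.
print(knapsack([2, 3, 4], [3, 4, 5], 5))  # → 7
```

An exact solver like this serves as the oracle against which agent outputs are scored, since for small instance sizes the optimum is cheap to compute.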
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: LLM/AI agents, fine-tuning, multi-agent system
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 4830