MACS CODER: A Multi-Agent Coding Framework for Small LMs --- From Fast Thinking to Deep Planning

ACL ARR 2026 January Submission2644 Authors

03 Jan 2026 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: Multi-Agent Systems, Code Generation, Large Language Models (LLMs), Test-time Compute
Abstract: Current multi-agent coding frameworks are often resource-intensive, employing a one-size-fits-all strategy that lacks efficiency. We propose MACS-Coder (Multi-Agent Adaptive Coding Structure), a dual-process framework inspired by human cognition. It adaptively switches between a Fast Thinking System for rapid, low-cost generation and a Deep Planning System---comprising specialized planning, templating, and debugging agents---for complex tasks. This architecture enables compact open-source models to achieve performance comparable to elite proprietary systems at significantly lower energy cost. Extensive evaluations show that MACS-Coder sets new SOTA results: with a gpt-oss-20B backbone, it attains 99.4% on HumanEval, 93.2% on MBPP, and 83.2% on LiveCodeBench V5, consistently outperforming prior methods such as CodeSIM and MapCoder. Notably, our 20B-parameter framework matches the performance of top-tier models such as o4-mini and Gemini 2.5 Pro, bridging the gap between small open-source and large closed-source systems. We will open-source our framework and evaluation suite here: https://anonymous.4open.science/r/MACS-Coder-B161HIIRqq0023
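The dual-process switching described in the abstract can be sketched as a simple router. This is a minimal illustration only: the function names, the scalar difficulty score, and the fixed threshold are assumptions for exposition, not the authors' actual API or routing policy.

```python
# Illustrative sketch of dual-process routing in the style of MACS-Coder.
# All names (Task, fast_thinking, deep_planning, macs_route) and the
# difficulty-threshold heuristic are hypothetical, not from the paper.
from dataclasses import dataclass


@dataclass
class Task:
    prompt: str
    difficulty: float  # assumed complexity estimate in [0, 1]


def fast_thinking(task: Task) -> str:
    # Fast Thinking System: single-pass, low-cost generation.
    return f"fast-solution({task.prompt})"


def deep_planning(task: Task) -> str:
    # Deep Planning System: specialized planning, templating, and
    # debugging agents applied as successive stages.
    plan = f"plan({task.prompt})"
    template = f"template({plan})"
    return f"debug({template})"


def macs_route(task: Task, threshold: float = 0.5) -> str:
    # Adaptive switch: cheap path for easy tasks, the full agent
    # pipeline for complex ones.
    if task.difficulty < threshold:
        return fast_thinking(task)
    return deep_planning(task)
```

For example, `macs_route(Task("sum a list", 0.1))` would take the fast path, while `macs_route(Task("implement an LRU cache", 0.9))` would traverse all three Deep Planning stages.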
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: NLP Applications, Language Modeling, Efficient/Low-Resource Methods for NLP, Generation
Contribution Types: NLP engineering experiment, Approaches for low-compute settings (efficiency), Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 2644