Everyone Contributes! Incentivizing Strategic Cooperation in Multi-LLM Systems via Sequential Public Goods Games
Keywords: Multi-LLM Collaboration, Public Goods Games, Reinforcement Learning
TL;DR: We propose MAC-SPGG, a game-theoretic RL framework that incentivizes scalable and robust cooperation among LLM agents via sequential public goods games.
Abstract: Coordinating multiple large language models (LLMs) to solve complex tasks collaboratively poses a fundamental trade-off between computation cost and the collective performance gain over an individual model. We introduce a novel, game-theoretically grounded reinforcement learning (RL) framework, the Multi-Agent Cooperation Sequential Public Goods Game (MAC-SPGG), to systematically incentivize cooperation in multi-LLM ensembles. In MAC-SPGG, LLM agents move in sequence, observing predecessors' outputs and updating beliefs to condition their own contributions. By redesigning the public-goods reward, effortful contribution becomes the unique Subgame Perfect Nash Equilibrium (SPNE), eliminating the free-riding that arises under the traditional SPGG or PGG. The sequential protocol replaces costly round-based information exchange with a streamlined decision flow, cutting communication overhead while retaining strategic depth. We prove the existence and uniqueness of the SPNE under realistic parameters, and empirically show that MAC-SPGG-trained ensembles outperform single-agent baselines, chain-of-thought prompting, and other cooperative methods, even achieving performance comparable to that of large-scale models across reasoning, math, code generation, and NLP tasks. Our results highlight the power of structured, incentive-aligned MAC-SPGG cooperation for scalable and robust multi-agent language generation.
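For context, the sketch below contrasts the classical public-goods payoff with a generic sequential, reshaped-reward variant of the kind the abstract describes. The notation (contribution c_i, multiplier r, history h_{i-1}, reshaped reward R_i) is illustrative only and is not drawn from the paper; it indicates the type of payoff redesign under which full effort can become the unique SPNE, not the authors' actual reward.

% Illustrative sketch only; the symbols c_i, r, h_{i-1}, R_i are assumed, not the paper's notation.
% Classical n-player public goods game: each agent i chooses a contribution c_i >= 0 and receives
\[
  u_i \;=\; \frac{r}{n}\sum_{j=1}^{n} c_j \;-\; c_i, \qquad 1 < r < n,
\]
% so contributing nothing is the dominant strategy, i.e., free-riding.
% A hypothetical sequential, reshaped-reward variant lets agent i observe the history
% h_{i-1} = (c_1, \dots, c_{i-1}) of predecessors' contributions and replaces the shared
% return with a reward R_i gated on the agent's own contribution:
\[
  u_i^{\mathrm{seq}} \;=\; R_i\!\bigl(c_i,\, h_{i-1}\bigr) \;-\; c_i .
\]
% If R_i is designed so that, at every information set, contributing effort strictly dominates
% shirking, backward induction yields full effort as the unique SPNE.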
Area: Generative and Agentic AI (GAAI)
Generative AI: I acknowledge that I have read and will follow this policy.
Submission Number: 386