Keywords: Multi-agent systems, Large language models, Credit assignment, Shapley values, Process reward models
TL;DR: We propose a theoretical framework that transforms system-level evaluations in multi-LLM systems into fair, credit-conserving, and repair-aware training signals, bridging the gap between global outcomes and local post-training supervision.
Abstract: Large Language Models (LLMs) in multi-agent systems (MAS) have shown promise for complex tasks, yet current training methods lack principled ways to connect system-level evaluation with agent- and message-level learning. We propose a theoretical framework that unifies cooperative game–theoretic attribution with process reward modeling to transform \emph{$\text{system evaluation} \rightarrow \text{agent credit} \rightarrow \text{response-level signals}$}. Unlike prior approaches that rely only on attribution (Shapley) or step-level labels (PRM), our method produces local, signed, and credit-conserving signals. In success cases, Shapley-based credit assignment fairly allocates outcomes across agents and is refined into per-message rewards that promote cooperation while discouraging redundancy or sabotage; in failure cases, first-error localization yields repair-aware preferences that penalize harmful steps while rewarding corrective attempts. The resulting signals are bounded, cooperative, and directly compatible with reinforcement- or preference-based post-training, providing a unified and auditable pathway from global evaluation to local supervision in LLM multi-agent training. Our contribution is conceptual: we present a theoretical foundation and training signals, leaving empirical validation for future work.
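As a concrete illustration of the success-case pathway sketched in the abstract (system evaluation → Shapley agent credit → per-message rewards), the following is a minimal sketch, not the paper's implementation: it assumes access to a coalition-level value function (here a hypothetical `scores` table), computes exact Shapley credit for a small agent set, and splits each agent's credit across its messages in proportion to hypothetical per-message weights so that total credit is conserved. All function names, agent names, and numbers are illustrative placeholders.

```python
# Illustrative sketch only: exact Shapley credit over a small agent set,
# followed by a credit-conserving split of each agent's share across its
# messages. The coalition value function and per-message weights are
# hypothetical stand-ins for the system-level evaluator described in the paper.
from itertools import combinations
from math import factorial

def shapley_credit(agents, value):
    """Exact Shapley values; `value` maps a frozenset coalition to a score."""
    n = len(agents)
    credit = {}
    for i in agents:
        others = [a for a in agents if a != i]
        phi = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1 excluding agent i
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(S | {i}) - value(S))
        credit[i] = phi
    return credit  # efficiency: sums to value(all agents) - value(empty set)

def per_message_rewards(agent_credit, message_weights):
    """Split each agent's credit across its messages, conserving the total."""
    rewards = {}
    for agent, phi in agent_credit.items():
        weights = message_weights[agent]
        total = sum(weights) or 1.0
        rewards[agent] = [phi * w / total for w in weights]
    return rewards

if __name__ == "__main__":
    agents = ["planner", "coder", "critic"]
    # Hypothetical system-level evaluation of every agent coalition.
    scores = {frozenset(): 0.0,
              frozenset({"planner"}): 0.2,
              frozenset({"coder"}): 0.3,
              frozenset({"critic"}): 0.1,
              frozenset({"planner", "coder"}): 0.7,
              frozenset({"planner", "critic"}): 0.4,
              frozenset({"coder", "critic"}): 0.5,
              frozenset({"planner", "coder", "critic"}): 1.0}
    credit = shapley_credit(agents, lambda S: scores[S])
    rewards = per_message_rewards(credit, {"planner": [1.0, 0.5],
                                           "coder": [2.0],
                                           "critic": [1.0, 1.0, 1.0]})
    print(credit)
    print(rewards)
```

Exact enumeration is exponential in the number of agents, so a larger team would call for a sampled Shapley estimator; the abstract's failure-case branch (first-error localization with repair-aware preferences) would replace the proportional split with signed per-message signals, which this sketch does not cover.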
Primary Area: foundation or frontier models, including LLMs
Submission Number: 21595