Multi-Agent Policy Gradients with Dynamic Weighted Value Decomposition

Published: 01 Jan 2025, Last Modified: 26 Jul 2025. Pattern Recognit. 2025. License: CC BY-SA 4.0
Abstract: In real-world multi-agent systems, agents must coordinate with one another under limited observation and communication abilities. Multi-agent policy gradient methods have recently made vigorous progress in such challenging settings. However, these methods suffer from scalability and credit assignment issues caused by the centralized critic. To address these issues, this paper proposes Dynamic Weighted QMIX Based Multi-Agent Policy Gradients (DXM), which introduces the idea of dynamic weighted value decomposition into the multi-agent actor-critic framework. Based on this idea, DXM applies a more general decomposition to the centralized critic than existing value decomposition methods, addressing the scalability and credit assignment issues in both continuous and discrete action spaces. Briefly, DXM employs deep deterministic policy gradient to learn the policies and a single centralized but factored critic, which decomposes the joint value as a dynamic weighted, nonlinear, nonmonotonic summation of individual value functions. Empirical evaluations on the StarCraft Multi-Agent Challenge benchmark (discrete action space) and the continuous predator-prey benchmark (continuous action space) show that DXM successfully addresses the scalability and credit assignment issues. DXM significantly outperforms other baselines, with an average win rate improvement of more than 15%.
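The core idea named in the abstract, a dynamic weighted and possibly nonmonotonic combination of individual value functions, can be sketched as follows. This is a minimal illustration, not the paper's actual networks: the fixed matrices `W_mix` and `v_bias` stand in for learned hypernetwork outputs, and all shapes and names are assumptions. The key point it demonstrates is that because the state-conditioned weights are not constrained to be non-negative (unlike standard QMIX), the mixed value need not be monotonic in each agent's individual utility.

```python
import numpy as np

# Illustrative sizes; not taken from the paper.
N_AGENTS, STATE_DIM = 3, 4

# Hypothetical fixed parameters standing in for a learned,
# state-conditioned weight/bias hypernetwork.
W_mix = np.array([[ 0.5, -0.8,  0.2],
                  [-0.3,  0.4,  0.6],
                  [ 0.7,  0.1, -0.5],
                  [ 0.2, -0.6,  0.3]])
v_bias = np.array([0.1, -0.2, 0.05, 0.3])

def mix(individual_qs, state):
    """Combine per-agent utilities with state-dependent weights.

    The weights come from tanh, so they lie in (-1, 1) and may be
    negative -- the mixed value can therefore *decrease* when one
    agent's utility increases (nonmonotonic decomposition).
    """
    w = np.tanh(state @ W_mix)   # per-agent weights, shape (N_AGENTS,)
    b = float(state @ v_bias)    # state-dependent bias
    return float(w @ individual_qs + b)

state = np.array([1.0, 0.0, 0.0, 0.0])
qs = np.array([1.0, 2.0, 0.5])
q_tot = mix(qs, state)
```

For this `state`, agent 1's weight is tanh(-0.8) < 0, so raising that agent's individual Q-value lowers the mixed value, which a monotonic mixer such as QMIX could not represent.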