Rethinking Individual Global Max in Cooperative Multi-Agent Reinforcement Learning

Published: 31 Oct 2022, Last Modified: 09 Oct 2022 · NeurIPS 2022 Accept · Readers: Everyone
Keywords: Individual Global Max, Cooperative Multi-Agent Reinforcement Learning, Value Decomposition, Imitation Learning, Data Aggregation
TL;DR: After revealing that Individual Global Max is a lossy decomposition, this paper introduces imitation learning into hypernetwork-based value decomposition to avoid error accumulation.
Abstract: In cooperative multi-agent reinforcement learning, centralized training and decentralized execution (CTDE) has achieved remarkable success. Individual Global Max (IGM) decomposition, an important element of CTDE, measures the consistency between local and joint policies. The majority of IGM-based research focuses on how to establish this consistency, but little attention has been paid to examining IGM's potential flaws. In this work, we reveal that the IGM condition is a lossy decomposition, and that the error of this lossy decomposition accumulates in hypernetwork-based methods. To address this issue, we propose an imitation learning strategy that separates the lossy decomposition from the Bellman iterations, thereby avoiding error accumulation. The proposed strategy is theoretically proved and empirically verified on the StarCraft Multi-Agent Challenge benchmark problem with zero sight view. The results also confirm that the proposed method outperforms state-of-the-art IGM-based approaches.
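For reference, the IGM condition as standardly stated in the value-decomposition literature (the notation below follows the common convention and is not necessarily the paper's): with joint action-value $Q_{tot}$, per-agent utilities $Q_i$, joint action-observation history $\boldsymbol{\tau} = (\tau^1, \ldots, \tau^n)$, and joint action $\mathbf{u} = (u^1, \ldots, u^n)$,

$$\arg\max_{\mathbf{u}} Q_{tot}(\boldsymbol{\tau}, \mathbf{u}) = \Big( \arg\max_{u^1} Q_1(\tau^1, u^1),\ \ldots,\ \arg\max_{u^n} Q_n(\tau^n, u^n) \Big).$$

The abstract's claim is that enforcing this consistency is lossy, and that when the decomposition sits inside the Bellman iterations of hypernetwork-based methods, the loss compounds across updates; the proposed imitation learning strategy moves the decomposition outside those iterations.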
Supplementary Material: pdf