Leveraging Joint-Action Embedding in Multiagent Reinforcement Learning for Cooperative Games

Published: 01 Jan 2024 · Last Modified: 16 Oct 2025 · IEEE Trans. Games 2024 · CC BY-SA 4.0
Abstract: State-of-the-art multiagent policy gradient (MAPG) methods have demonstrated convincing capability in many cooperative games. However, the exponentially growing joint-action space severely challenges the critic's value evaluation and hinders the performance of MAPG methods. To address this issue, we augment the Central-Q policy gradient with a joint-action embedding function and propose mutual-information maximization MAPG (M3APG). The joint-action embedding function encodes state-transition information into joint-actions, which improves the critic's generalization over the joint-action space by allowing it to infer the outcomes of joint-actions. We theoretically prove that, with a fixed joint-action embedding function, the convergence of M3APG is guaranteed. Experimental results on the StarCraft Multi-Agent Challenge (SMAC) demonstrate that M3APG produces more accurate value evaluations and outperforms other basic MAPG models across maps of multiple difficulty levels. We empirically show that our joint-action embedding model can be extended to value-based multiagent reinforcement learning methods and state-of-the-art MAPG methods. Finally, we run an ablation study to show that the use of mutual information in our method is necessary and effective.
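To make the idea concrete, below is a minimal, hypothetical sketch (not the authors' released code) of the mechanism the abstract describes: a joint-action embedding network whose output feeds a central Q critic, trained with an auxiliary next-state prediction loss as a simple surrogate for maximizing mutual information between the embedding and the state transition. All module names, layer sizes, and the reconstruction-style MI surrogate are illustrative assumptions.

```python
import torch
import torch.nn as nn


class JointActionEmbedding(nn.Module):
    """Maps the concatenated one-hot actions of all agents to a latent embedding."""
    def __init__(self, n_agents: int, n_actions: int, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_agents * n_actions, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, joint_actions_onehot: torch.Tensor) -> torch.Tensor:
        return self.net(joint_actions_onehot)


class TransitionPredictor(nn.Module):
    """Predicts the next state from (state, embedding); its prediction loss acts as a
    tractable surrogate for maximizing I(embedding; state transition)."""
    def __init__(self, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + embed_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, state: torch.Tensor, action_embed: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action_embed], dim=-1))


class CentralQ(nn.Module):
    """Central critic evaluating Q(state, joint-action) through the shared embedding."""
    def __init__(self, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state: torch.Tensor, action_embed: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action_embed], dim=-1))


def training_step(batch, embed, predictor, critic, mi_weight: float = 0.1):
    """Illustrative loss on a batch (state, joint_action_onehot, next_state, target_q):
    standard TD regression for the critic plus the MI-surrogate prediction term."""
    state, joint_action, next_state, target_q = batch
    z = embed(joint_action)                                       # joint-action embedding
    td_loss = (critic(state, z).squeeze(-1) - target_q).pow(2).mean()
    mi_loss = (predictor(state, z) - next_state).pow(2).mean()    # MI surrogate
    return td_loss + mi_weight * mi_loss
```

In this sketch, the critic never sees raw joint-actions; it sees embeddings that are forced to carry transition information, which is one plausible way to read the abstract's claim that the critic can then generalize over the joint-action space by inferring the outcomes of joint-actions.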