Keywords: Reinforcement Learning, Multi-agent Systems, Cooperative, Global Convergence, Deep Reinforcement Learning
Abstract: Despite the empirical success of cooperative multi-agent reinforcement learning (co-MARL) algorithms in recent years, theoretical understanding, especially of algorithms under the centralized training with decentralized execution (CTDE) framework, is still lacking. Interestingly, existing algorithms sometimes fail on seemingly simple tasks. Motivated by these failure cases, this paper proposes multi-agent optimistic soft Q-learning (MAOSQL), a new co-MARL algorithm with a global convergence guarantee. The design of MAOSQL includes an optimistic local Q-function and a softmax local policy, which naturally leads to an objective different from that of existing algorithms. We show that optimizing this objective yields near-optimal policies with a tractable error bound, and that MAOSQL provably converges to the global optimum with properly chosen hyper-parameters. Further, we show that MAOSQL can be easily extended to the deep reinforcement learning setting, yielding MAOSDQN. We evaluate MAOSDQN in didactic environments where value decomposition methods or policy gradient methods fail, as well as on Level-Based Foraging, a popular MARL benchmark. The results confirm our theoretical analysis and indicate the potential of the proposed method to handle more complicated problems.
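The abstract describes a per-agent softmax policy over optimistic local Q-values. The following minimal sketch illustrates that general idea only; the function name, temperature, and additive optimism bonus are hypothetical placeholders and not the paper's actual MAOSQL formulation.

```python
import numpy as np

def softmax_local_policy(q_local, temperature=1.0, optimism_bonus=0.0):
    """Illustrative per-agent policy: softmax over (optimistically shifted) local Q-values.

    q_local: array of shape (num_local_actions,) holding the agent's local Q-values.
    optimism_bonus: hypothetical additive bonus standing in for an optimistic estimate.
    """
    logits = (np.asarray(q_local, dtype=float) + optimism_bonus) / temperature
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Usage: each agent samples its action independently from its own local softmax policy.
q_values = np.array([0.2, 1.5, -0.3])
policy = softmax_local_policy(q_values, temperature=0.5, optimism_bonus=0.1)
action = np.random.choice(len(policy), p=policy)
```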
Supplementary Material: zip
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6676