Regularized Softmax Deep Multi-Agent Q-Learning

Published: 09 Nov 2021, Last Modified: 14 Jul 2024. NeurIPS 2021 Poster.
Keywords: Multi-Agent Reinforcement Learning, MARL, Value Factorization, Overestimation
Abstract: Tackling overestimation in $Q$-learning is an important problem that has been extensively studied in single-agent reinforcement learning, but has received comparatively little attention in the multi-agent setting. In this work, we empirically demonstrate that QMIX, a popular $Q$-learning algorithm for cooperative multi-agent reinforcement learning (MARL), suffers from more severe overestimation in practice than previously acknowledged, and that this overestimation is not mitigated by existing approaches. We rectify this with a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline, and demonstrate its effectiveness in stabilizing learning. Furthermore, we propose to employ a softmax operator, which we efficiently approximate in a novel way in the multi-agent setting, to further reduce the potential overestimation bias. Our approach, Regularized Softmax (RES) Deep Multi-Agent $Q$-Learning, is general and can be applied to any $Q$-learning based MARL algorithm. We demonstrate that, when applied to QMIX, RES avoids severe overestimation and significantly improves performance, yielding state-of-the-art results in a variety of cooperative multi-agent tasks, including the challenging StarCraft II micromanagement benchmarks.
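To make the two ingredients of the abstract concrete, the snippet below is a minimal sketch, not the authors' released implementation (see the Code link below): a Boltzmann softmax operator over joint action-values as a softer alternative to the hard max, and a TD loss with a penalty on estimates that overshoot a baseline. The inverse temperature `beta`, the penalty weight `lam`, the choice of baseline, and the exact form of the penalty are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of (i) a Boltzmann softmax value estimate and (ii) a
# baseline-regularized TD loss. Hyperparameters and the penalty form are
# assumptions for illustration only.
import torch
import torch.nn.functional as F


def softmax_operator(q_values: torch.Tensor, beta: float = 5.0) -> torch.Tensor:
    """Boltzmann softmax value estimate: sum_a softmax(beta * Q)(a) * Q(a).

    q_values: (batch, num_joint_actions). Returns a (batch,) tensor.
    This is a softer alternative to max_a Q(s, a), which reduces the
    overestimation bias induced by the hard max.
    """
    weights = F.softmax(beta * q_values, dim=-1)
    return (weights * q_values).sum(dim=-1)


def regularized_td_loss(q_taken: torch.Tensor,
                        reward: torch.Tensor,
                        next_q_values: torch.Tensor,
                        baseline: torch.Tensor,
                        gamma: float = 0.99,
                        beta: float = 5.0,
                        lam: float = 0.1) -> torch.Tensor:
    """TD loss with a softmax target plus a penalty discouraging joint
    action-values from drifting far above a baseline (e.g., an empirical
    return). q_taken, reward, baseline: (batch,); next_q_values: (batch, A).
    """
    with torch.no_grad():
        target = reward + gamma * softmax_operator(next_q_values, beta)
    td_loss = F.mse_loss(q_taken, target)
    # Penalize only the overshoot above the baseline (one simple choice).
    reg = torch.clamp(q_taken - baseline, min=0.0).pow(2).mean()
    return td_loss + lam * reg
```

In a QMIX-style setting, `q_taken` would be the mixed joint action-value for the executed joint action and `next_q_values` the joint action-values at the next state; both names here are hypothetical placeholders rather than identifiers from the RES codebase.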
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
TL;DR: We propose Regularized Softmax Deep Multi-Agent Q-Learning which effectively reduces overestimation bias, stabilizes learning, and achieves state-of-the-art performance in a variety of cooperative multi-agent tasks.
Supplementary Material: pdf
Code: https://github.com/ling-pan/RES
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2103.11883/code)
