Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning

21 May 2021, 20:44 (modified: 01 Nov 2021, 14:47) · NeurIPS 2021 Spotlight · Readers: Everyone
Keywords: Extrapolation Error, Offline Reinforcement Learning, Multi-Agent Reinforcement Learning
TL;DR: The first study analyzing and addressing the extrapolation error in multi-agent reinforcement learning.
Abstract: Learning from datasets without interaction with environments (offline learning) is an essential step toward applying Reinforcement Learning (RL) algorithms in real-world scenarios. However, compared with its single-agent counterpart, offline multi-agent RL introduces more agents with larger state and action spaces, which makes it more challenging yet has attracted little attention. We demonstrate that current offline RL algorithms are ineffective in multi-agent systems due to accumulated extrapolation error. In this paper, we propose a novel offline RL algorithm, named Implicit Constraint Q-learning (ICQ), which effectively alleviates extrapolation error by trusting only the state-action pairs given in the dataset for value estimation. Moreover, we extend ICQ to multi-agent tasks by decomposing the joint policy under the implicit constraint. Experimental results demonstrate that the extrapolation error is successfully controlled within a reasonable range and is insensitive to the number of agents. We further show that ICQ achieves state-of-the-art performance on challenging multi-agent offline tasks (StarCraft II). Our code is public online at
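The core idea described in the abstract, computing value targets only from state-action pairs that actually appear in the dataset, can be contrasted with a standard max-backup in a minimal sketch. The tabular setup, variable names, and SARSA-style in-sample target below are illustrative assumptions for exposition, not the paper's exact actor-critic update:

```python
import numpy as np

# Toy tabular Q-table: 3 states x 4 actions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))

# A tiny offline batch of (s, a, r, s', a') transitions.
# a_next comes from the dataset itself, so the in-sample backup only
# ever queries Q at state-action pairs the behavior policy visited.
batch = dict(
    s=np.array([0, 1, 2]),
    a=np.array([1, 0, 3]),
    r=np.array([1.0, 0.0, 0.5]),
    s_next=np.array([1, 2, 0]),
    a_next=np.array([2, 1, 0]),  # next actions actually taken in the data
    done=np.array([0.0, 0.0, 1.0]),
)
gamma = 0.99

# Standard Q-learning target: max over ALL actions, including ones the
# dataset never contains -- the source of extrapolation error offline.
y_max = batch["r"] + gamma * (1 - batch["done"]) * Q[batch["s_next"]].max(axis=1)

# In-sample target in the spirit of ICQ: evaluate only the dataset's own
# next actions, so no out-of-distribution Q-value is ever trusted.
y_insample = (
    batch["r"]
    + gamma * (1 - batch["done"]) * Q[batch["s_next"], batch["a_next"]]
)
```

Because the max is taken over every action while the in-sample target uses one dataset action per transition, `y_max` upper-bounds `y_insample` elementwise; with learned, noisy Q-values that gap is exactly the extrapolation error the abstract refers to.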
Supplementary Material: zip
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.