Causal Mean Field Multi-Agent Reinforcement Learning


22 Sept 2022, 12:38 (modified: 17 Nov 2022, 07:25) · ICLR 2023 Conference Blind Submission
Keywords: multi-agent reinforcement learning, causal inference
TL;DR: This paper addresses the scalability problem in large-scale multi-agent systems. We use causal inference to improve the robustness of mean field Q-learning. Experiments verify that our method achieves superior scalability.
Abstract: Scalability remains a challenge in multi-agent reinforcement learning and is currently under active research. However, existing works lack the ability to identify the essential interactions in non-stationary environments. We propose causal mean field Q-learning (CMFQ) to address this problem. It retains the advantage of MFQ, which compresses the size of the state-action space dramatically, while being more robust to the non-stationarity caused by an increasing number of agents. With the help of a structural causal model (SCM), we enable agents to identify which ally or opponent is more crucial by asking "what if", and then to pay more attention to the more crucial ones. We test CMFQ in mixed cooperative-competitive and cooperative games, which verifies our method's scalability.
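The abstract's "what if" mechanism can be illustrated with a minimal sketch. The idea, as described, is to intervene on one neighbor's action in the mean-field aggregate and measure how much the Q-value changes; neighbors with larger causal effect receive more attention. The function names, the softmax weighting, and the toy Q-function below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def counterfactual_weights(q_fn, state, action, neighbor_actions, baseline):
    """Weight each neighbor by how much replacing its action with a
    counterfactual baseline (asking "what if") changes the mean-field Q-value.

    q_fn(state, action, mean_action) -> float is an assumed Q-function that
    conditions on the mean action of neighbors, as in mean field Q-learning.
    """
    mean_action = neighbor_actions.mean(axis=0)
    q_factual = q_fn(state, action, mean_action)
    effects = []
    for j in range(len(neighbor_actions)):
        cf = neighbor_actions.copy()
        cf[j] = baseline  # intervention do(a_j = baseline) on neighbor j
        q_cf = q_fn(state, action, cf.mean(axis=0))
        effects.append(abs(q_factual - q_cf))  # magnitude of causal effect
    effects = np.asarray(effects)
    # Softmax over causal effects: more crucial neighbors get larger weight
    w = np.exp(effects - effects.max())
    return w / w.sum()

def weighted_mean_action(neighbor_actions, weights):
    """Aggregate neighbor actions with attention-like causal weights."""
    return (weights[:, None] * neighbor_actions).sum(axis=0)
```

In a toy example with a bilinear Q-function `q(s, a, m) = a·m`, a neighbor whose action aligns with the agent's own one-hot action has a larger counterfactual effect than a dissenting neighbor, and therefore receives more weight in the aggregate.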
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)