Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning

Published: 05 May 2023, Last Modified: 05 May 2023. Accepted by TMLR.
Abstract: Learning in multi-agent systems is highly challenging due to several factors, including the non-stationarity introduced by agents' interactions and the combinatorial nature of their state and action spaces. In particular, we consider the Mean-Field Control (MFC) problem, which assumes an asymptotically infinite population of identical agents that aim to collaboratively maximize a collective reward. In many cases, solutions of an MFC problem are good approximations for large systems; hence, efficient learning for MFC is valuable for the analogous finite-agent setting with many agents. Specifically, we focus on the case of unknown system dynamics, where the goal is to simultaneously optimize the rewards and learn from experience. We propose an efficient model-based reinforcement learning algorithm, $\text{M}^3$--UCRL, that runs in episodes, balances exploration and exploitation during policy learning, and provably solves this problem. Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC, obtained via a novel mean-field type analysis. To learn the system's dynamics, $\text{M}^3$--UCRL can be instantiated with various statistical models, e.g., neural networks or Gaussian processes. Moreover, we provide a practical parametrization of the core optimization problem that facilitates gradient-based optimization when combined with differentiable dynamics approximators such as neural networks.
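To make the episodic model-based structure described in the abstract concrete, here is a minimal, self-contained sketch of such a loop on a finite state space, where the mean-field state is a probability vector over states. This is an illustrative assumption-based toy, not the paper's $\text{M}^3$--UCRL algorithm: it uses a count-based transition model in place of a neural network or Gaussian process, exhaustive policy search in place of gradient-based optimization, and omits the optimism (upper-confidence) machinery entirely. All names (`rollout`, `plan`, `P_true`, etc.) are invented for this sketch.

```python
# Toy episodic model-based loop for a finite mean-field control problem.
# NOT the paper's algorithm: count-based model, exhaustive planning, no optimism.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
S, A, H = 3, 2, 5                      # states, actions, episode horizon

# Unknown true dynamics: P_true[a, s] is a distribution over next states.
P_true = rng.dirichlet(np.ones(S), size=(A, S))
reward = rng.random((S, A))            # per-(state, action) reward
mu0 = np.ones(S) / S                   # initial mean-field state (uniform)

def rollout(P, policy):
    """Return and final mean-field state of a stationary policy under model P."""
    mu, ret = mu0.copy(), 0.0
    for _ in range(H):
        ret += sum(mu[s] * reward[s, policy[s]] for s in range(S))
        mu = sum(mu[s] * P[policy[s], s] for s in range(S))  # mean-field update
    return ret, mu

def plan(P):
    """Best stationary policy under model P (exhaustive: fine for tiny S, A)."""
    return max(product(range(A), repeat=S), key=lambda pi: rollout(P, pi)[0])

# Episodic loop: act in the true system, refit the model, replan.
counts = np.ones((A, S, S))            # Laplace-smoothed transition counts
policy = (0,) * S
for episode in range(20):
    for s in range(S):                 # sample one transition from each state
        a = policy[s] if rng.random() > 0.3 else int(rng.integers(A))
        s_next = rng.choice(S, p=P_true[a, s])
        counts[a, s, s_next] += 1
    P_hat = counts / counts.sum(axis=2, keepdims=True)  # learned dynamics
    policy = plan(P_hat)               # exploit the current model

ret, mu_final = rollout(P_true, policy)
```

The same skeleton (collect data, refit dynamics, optimize a policy against the learned model) is what the paper instantiates with expressive differentiable models and an optimistic planning objective to obtain its regret guarantees.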
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Based on the reviews and discussions, we revised the paper and made the following changes:
* Added a remark at the end of Section 2 (Problem Statement) comparing our problem setting to Lauriere et al. (2022)
* Moved the pseudocode from the Appendix to Section 3 (The $M^3$--UCRL Algorithm)
* Elaborated on the interpretation of the regret bound after Theorem 1
* Added the swarm-motion experiments from previous submissions to the Appendix and referenced them in Section 5 (Experiments)
* Added further comments on the model-free comparison at the end of Section 5 (Experiments)
A difference file highlighting the changes has been added to the supplementary material: text highlighted in blue is new, and text highlighted in red has been removed.
Supplementary Material: zip
Assigned Action Editor: ~Lihong_Li1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 808