Coordinated Proximal Policy Optimization

21 May 2021, 20:42 (modified: 22 Jan 2022, 01:37) · NeurIPS 2021 Poster
Keywords: Reinforcement Learning, Multi-Agent Learning, Proximal Policy Optimization, StarCraft
TL;DR: We propose Coordinated Proximal Policy Optimization (CoPPO), which coordinates the step sizes of the agents' policy updates.
Abstract: We present Coordinated Proximal Policy Optimization (CoPPO), an algorithm that extends the original Proximal Policy Optimization (PPO) to the multi-agent setting. The key idea lies in the coordinated adaptation of step sizes during the policy update process among multiple agents. We prove the monotonicity of policy improvement when optimizing a theoretically-grounded joint objective, and derive a simplified optimization objective based on a set of approximations. We then show that this objective enables CoPPO to achieve dynamic credit assignment among agents, thereby alleviating the high-variance issue that arises when agent policies are updated concurrently. Finally, we demonstrate that CoPPO outperforms several strong baselines and is competitive with the latest multi-agent PPO method (i.e., MAPPO) under typical multi-agent settings, including cooperative matrix games and the StarCraft II micromanagement tasks.
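To illustrate the kind of objective the abstract describes, the sketch below shows a PPO-style clipped surrogate applied to a *joint* probability ratio (the product of per-agent ratios), so that each agent's effective step size depends on the others' updates. This is a minimal toy sketch for intuition only, not the authors' actual CoPPO objective; the function name, array shapes, and the simple product-of-ratios form are assumptions.

```python
import numpy as np

def joint_clipped_surrogate(old_probs, new_probs, advantages, eps=0.2):
    """Toy multi-agent clipped surrogate (NOT the exact CoPPO objective).

    old_probs, new_probs: shape (n_agents, batch), per-agent action
    probabilities under the old and new policies.
    advantages: shape (batch,), a shared (joint) advantage estimate.
    """
    # Per-agent probability ratios pi_new / pi_old.
    ratios = new_probs / old_probs
    # Joint ratio: product over agents, so one agent's large step
    # shrinks the room left for the others -- a crude form of
    # coordinated step-size adaptation.
    joint_ratio = np.prod(ratios, axis=0)
    clipped = np.clip(joint_ratio, 1.0 - eps, 1.0 + eps)
    # PPO-style pessimistic bound on the surrogate objective.
    return np.mean(np.minimum(joint_ratio * advantages,
                              clipped * advantages))
```

With two agents where only the first changes its policy, the joint ratio equals that agent's ratio, and the clip bounds the combined update just as in single-agent PPO.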
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: zip