Promoting Coordination through Policy Regularization in Multi-Agent Deep Reinforcement Learning

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: Reinforcement Learning, Multi-Agent, Continuous Control, Regularization, Coordination, Inductive biases
TL;DR: We propose regularization objectives for multi-agent RL algorithms that foster coordination on cooperative tasks.
Abstract: A central challenge in multi-agent reinforcement learning is inducing coordination between the agents of a team. In this work, we investigate how to promote inter-agent coordination using policy regularization and discuss two possible avenues, based respectively on inter-agent modelling and synchronized sub-policy selection. We test each approach on four challenging continuous control tasks with sparse rewards and compare them against three baselines, including MADDPG, a state-of-the-art multi-agent reinforcement learning algorithm. To ensure a fair comparison, we rely on a thorough hyper-parameter selection and training methodology that allots a fixed hyper-parameter search budget to each algorithm and environment. We then assess the hyper-parameter sensitivity, sample efficiency, and asymptotic performance of each learning method. Our experiments show that the proposed methods lead to significant improvements on cooperative problems. We further analyze the effects of the proposed regularizations on the behaviors learned by the agents.
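To make the inter-agent modelling idea concrete, here is a minimal sketch (not the authors' code) of how an agent-modelling regularizer could be combined with a policy loss: the agent's error in predicting a teammate's action is added as an auxiliary penalty. The names `policy_loss`, `predicted_action`, `teammate_action`, and the coefficient `coef` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def regularized_loss(policy_loss, predicted_action, teammate_action, coef=0.1):
    """Policy loss plus an auxiliary inter-agent-modelling term
    (hypothetical sketch): the mean squared error between the agent's
    prediction of its teammate's action and the action actually taken,
    scaled by a regularization coefficient `coef`."""
    model_error = np.mean(
        (np.asarray(predicted_action) - np.asarray(teammate_action)) ** 2
    )
    return policy_loss + coef * model_error

# When the teammate's action is predicted perfectly, no penalty is added.
loss = regularized_loss(1.0, [0.2, -0.3], [0.2, -0.3], coef=0.5)
```

In an actor-critic method such as MADDPG, a term like this would be minimized alongside the usual policy objective, nudging each agent toward policies that remain predictable to its teammates.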
Code: https://drive.google.com/file/d/1BvclzmkIgvDieov96YSBAj24Q2VXQM2w/view?usp=sharing