An Adaptive Entropy-Regularization Framework for Multi-Agent Reinforcement Learning

Anonymous

22 Sept 2022, 12:33 (modified: 16 Nov 2022, 17:28) · ICLR 2023 Conference Blind Submission · Readers: Everyone
Keywords: Multi-Agent Reinforcement Learning, Entropy Regularization, Exploration-Exploitation Tradeoff
TL;DR: This paper proposes an adaptive entropy-regularization framework for multi-agent reinforcement learning that learns an adequate amount of exploration for each agent based on its required degree of exploration.
Abstract: In this paper, we propose an adaptive entropy-regularization framework (ADER) for multi-agent reinforcement learning (RL) that learns an adequate amount of exploration for each agent based on its required degree of exploration. To handle the instability arising from updating multiple entropy temperature parameters for multiple agents, we disentangle the soft value function into two types: one for pure reward and the other for entropy. By applying multi-agent value factorization to the disentangled value function of pure reward, we obtain a relevant metric for assessing the necessary degree of exploration for each agent. Based on this metric, the ADER algorithm builds on maximum-entropy RL and controls the level of exploration across agents over time by learning a proper target entropy for each agent. Experimental results show that the proposed scheme significantly outperforms current state-of-the-art multi-agent RL algorithms.
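The abstract's core mechanism (per-agent entropy temperatures driven toward per-agent target entropies) follows the SAC-style automatic temperature adjustment. The sketch below illustrates only that generic update rule, not the paper's actual ADER implementation; all names (`update_log_alpha`, `target_entropy`, the learning rate) are illustrative assumptions, and the paper's target entropies would themselves be learned from the factorized pure-reward value function rather than fixed as here.

```python
import numpy as np

def update_log_alpha(log_alpha, log_pi, target_entropy, lr=1e-3):
    """One SAC-style gradient step on an agent's log-temperature.

    log_pi: sampled log-probabilities of the agent's actions, so the
    policy entropy is approximately -log_pi.mean(). When entropy falls
    below target_entropy, (log_pi + target_entropy).mean() > 0, the
    gradient below is negative, and log_alpha rises, i.e., the agent is
    pushed to explore more; the reverse holds when entropy is too high.
    """
    # Analytic gradient of the usual temperature loss
    #   L(log_alpha) = -exp(log_alpha) * E[log_pi + target_entropy]
    grad = -np.exp(log_alpha) * (log_pi + target_entropy).mean()
    return log_alpha - lr * grad

# Hypothetical two-agent example with different target entropies:
log_alphas = [0.0, 0.0]
targets = [1.0, 0.2]  # agent 0 is asked to keep higher entropy
log_pis = [np.array([-0.1, -0.2]), np.array([-0.1, -0.2])]
log_alphas = [update_log_alpha(a, lp, t)
              for a, lp, t in zip(log_alphas, log_pis, targets)]
```

Because each agent's temperature is a single scalar updated from its own entropy gap, per-agent targets let the framework allocate more exploration to some agents than others, which is the knob ADER adapts over time.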
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)