Learning to Share in Multi-Agent Reinforcement Learning

Published: 21 Apr 2022, Last Modified: 22 Oct 2023. Cells2Societies 2022 Oral.
Keywords: Paper Track, Cooperative Multi-Agent Reinforcement Learning, Networked Multi-Agent Reinforcement Learning, Sharing
TL;DR: We propose a hierarchically decentralized learning framework for networked MARL that enables agents to learn to dynamically share reward with neighbors so as to collaboratively optimize the global objective.
Abstract: In this paper, we study the problem of networked multi-agent reinforcement learning (MARL), where a number of agents are deployed as a partially connected network and each interacts only with nearby agents. Networked MARL requires all agents to make decisions in a decentralized manner to optimize a global objective, with communication restricted to neighbors over the network. Inspired by the fact that sharing plays a key role in humans' learning of cooperation, we propose LToS, a hierarchically decentralized MARL framework that enables agents to learn to dynamically share reward with neighbors, encouraging agents to cooperate on the global objective through collectives. For each agent, the high-level policy learns how to share reward with neighbors so as to decompose the global objective, while the low-level policy learns to optimize the local objective induced by the high-level policies in the neighborhood. The two policies form a bi-level optimization and learn alternately. We empirically demonstrate that LToS outperforms existing methods in both social dilemma and networked MARL scenarios across scales.