The Power in Communication: Power Regularization of Communication for Autonomy in Cooperative Multi-Agent Reinforcement Learning

Published: 01 Jun 2024, Last Modified: 24 Jul 2024
Venue: CoCoMARL 2024 Poster
License: CC BY 4.0
Keywords: Multi-Agent Reinforcement Learning, Power Regularization, Communication
TL;DR: A paper proposing power regularization of communication to limit the power agents delegate to a communication channel or protocol.
Abstract: Communication plays a vital role in coordination in Multi-Agent Reinforcement Learning (MARL) systems. However, misaligned agents can exploit the trust and power that other agents delegate to the communication medium. In this paper, we propose power regularization as a method to limit the adverse effects of communication by misaligned agents. Specifically, we focus on communication that impairs the performance of cooperative agents. Power is a measure of the influence one agent's actions have over another agent's policy. By introducing power regularization over communication, we aim to let designers control or reduce an agent's dependency on communication when appropriate. With this capability, we aim to train agent policies that are resilient to performance deterioration caused by misuse of the communication channel or communication protocol. We investigate several environments in which power regularization over communication is valuable for regulating the power dynamics that agents delegate to the communication medium.
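The abstract defines power as the influence one agent's actions have over another agent's policy, and proposes regularizing it. The following is a minimal illustrative sketch, not the paper's actual formulation: it assumes power over a communication channel can be measured as the KL divergence between the receiver's action distribution with and without the sender's message, and that the regularizer is subtracted from the task reward with a hypothetical coefficient `lam`.

```python
import numpy as np

def softmax(logits):
    # Convert logits to a probability distribution over actions.
    z = np.exp(logits - logits.max())
    return z / z.sum()

def kl(p, q):
    # KL divergence between two discrete distributions.
    return float(np.sum(p * np.log(p / q)))

def communication_power(policy_with_msg, policy_no_msg):
    # Hypothetical proxy for the sender's power over the receiver:
    # how much the message shifts the receiver's action distribution.
    return kl(policy_with_msg, policy_no_msg)

def regularized_reward(task_reward, power, lam=0.5):
    # Power-regularized reward: r' = r - lam * power, so policies
    # that depend heavily on communication are penalized.
    return task_reward - lam * power

# Toy receiver: action logits are a linear function of [observation, message].
W = np.array([[1.0, -1.0],
              [0.5,  0.5]])
obs, msg = 1.0, 1.0
p_with = softmax(W @ np.array([obs, msg]))     # policy given the message
p_without = softmax(W @ np.array([obs, 0.0]))  # policy with message zeroed out
power = communication_power(p_with, p_without)
r = regularized_reward(1.0, power, lam=0.5)
```

Tuning `lam` would trade off task performance against dependence on the channel: at `lam = 0` the agent may delegate arbitrary power to communication, while large `lam` pushes it toward policies that ignore messages entirely.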
Submission Number: 11