Coordination Machines for Minimizing Communication in Multi-Agent Reinforcement Learning

Published: 01 Jun 2024, Last Modified: 08 Aug 2024 · CoCoMARL 2024 Poster · CC BY 4.0
Keywords: multi-agent reinforcement learning, multi-agent systems, formal methods
Abstract: Multi-agent systems (MAS) are a promising solution to many real-world problems. However, complete and reliable communication is not guaranteed in real-world settings. A key problem in MAS is thus ensuring that agents can meet global specifications while minimizing communication cost. We approach this problem by proposing a hierarchical model that decomposes a global multi-agent task, given by a linear temporal logic (LTL) formula, into an equivalent automaton defined over subtasks, each given as an LTL formula. Our key idea is that different subtasks may require different levels of inter-agent communication, and that modeling a global task as the composition of subtasks enables context-aware communication. To solve this problem, we formulate a hierarchical agent that solves an optimization problem to ensure the MAS satisfies a probabilistic task-completion specification while minimizing inter-agent communication. We then develop an algorithm that optimizes the hierarchical agent based on the performance of a low-level team of agents trained to solve the subtasks using multi-agent reinforcement learning (MARL). Finally, we compare our approach to a baseline monolithic task model that approximates a common formulation used in MARL, and show that our method provides a significant reduction in communication cost over the baseline.
Submission Number: 4