Cooperative Information Dissemination with Graph-based Multi-Agent Reinforcement Learning

Published: 01 Jun 2024, Last Modified: 17 Jun 2024 · CoCoMARL 2024 Oral · CC BY 4.0
Keywords: Multi-Agent Reinforcement Learning, Reinforcement Learning, Graph Neural Networks, Information Dissemination, Communication Networks
Abstract: Efficient information dissemination is crucial for supporting critical operations across domains like disaster response, autonomous vehicles, and sensor networks. This paper introduces a Multi-Agent Reinforcement Learning (MARL) approach as a significant step toward more decentralized, efficient, and collaborative information dissemination. We propose a Partially Observable Stochastic Game (POSG) formulation for information dissemination that empowers each agent to decide independently whether to forward a message, based on observations of its one-hop neighborhood. This constitutes a significant paradigm shift from the heuristics currently employed in real-world broadcast protocols. Our approach harnesses Graph Convolutional Reinforcement Learning and Graph Attention Networks (GATs) with dynamic attention to capture essential network features. We propose two approaches to accomplish cooperative information dissemination, L-DyAN and HL-DyAN, which differ in the information exchanged among agents. Our experimental results show that the trained policies outperform existing methods, including the state-of-the-art heuristic, in terms of network coverage and communication overhead on dynamic networks of varying density and behavior.
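The dynamic attention the abstract refers to is typically the GATv2-style scoring, where the attention weight an agent assigns to each one-hop neighbor depends on both node features. A minimal NumPy sketch of a single attention head is below; the function names, weight shapes, and usage are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    # Standard LeakyReLU nonlinearity used inside GAT attention scoring.
    return np.where(x > 0, x, slope * x)

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    z = np.exp(x - x.max())
    return z / z.sum()

def dynamic_attention_aggregate(h_self, h_neighbors, W_l, W_r, a):
    """GATv2-style dynamic attention over an agent's one-hop neighborhood.

    Score per neighbor: e_j = a^T LeakyReLU(W_l h_self + W_r h_j),
    normalized with softmax, then used to aggregate transformed
    neighbor features. All names here are illustrative.
    """
    scores = np.array([
        a @ leaky_relu(W_l @ h_self + W_r @ h_j)
        for h_j in h_neighbors
    ])
    alpha = softmax(scores)  # attention weights, one per neighbor
    out = sum(w * (W_r @ h_j) for w, h_j in zip(alpha, h_neighbors))
    return alpha, out

# Toy usage: an agent with 5 neighbors, 3-dim observations, 4-dim embedding.
rng = np.random.default_rng(0)
h_self = rng.normal(size=3)
h_neighbors = [rng.normal(size=3) for _ in range(5)]
W_l = rng.normal(size=(4, 3))
W_r = rng.normal(size=(4, 3))
a = rng.normal(size=4)
alpha, out = dynamic_attention_aggregate(h_self, h_neighbors, W_l, W_r, a)
```

The resulting `out` vector could then feed a per-agent Q-head deciding whether to forward the message, keeping the decision local to the one-hop neighborhood as the POSG formulation requires.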
Submission Number: 23