TieComm: Learning a Hierarchical Communication Topology Based on Tie Theory

Published: 01 Jan 2023 · Last Modified: 11 Apr 2025 · DASFAA (1) 2023 · License: CC BY-SA 4.0
Abstract: Communication plays an important role in the Internet of Things, assisting cooperation between devices for better resource management. This work considers the problem of learning cooperative policies through communication in Multi-Agent Reinforcement Learning (MARL), where communication helps stabilize agent training and improves the learned policy by enabling agents to capture more information in partially observable environments. Existing studies either adopt a topology fixed a priori by experts or learn a communication topology through a costly process. In this work, we optimize the communication mechanism by exploiting both local and distant agent communications. Our solution is motivated by tie theory in social networks, where strong ties (close friends) communicate differently from weak ties (distant friends). The proposed multi-agent reinforcement learning framework, named TieComm, learns a dynamic communication topology consisting of inter- and intra-group communication for efficient policy learning. We factorize the joint multi-agent policy into a centralized tie reasoning policy and decentralized conditional action policies of the agents, based on which we propose an alternating update scheme to achieve efficient optimization. Experimental results on Level-Based Foraging and Blind-particle Spread demonstrate the effectiveness of our tie-theory-based RL framework.
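The factorization described in the abstract can be read roughly as $\pi(\mathbf{a}\mid\mathbf{o}) = \pi_{\text{tie}}(G\mid\mathbf{o})\prod_i \pi_i(a_i\mid o_i, m_i(G))$, where $G$ is the tie structure inferred by the centralized reasoner and $m_i(G)$ are the messages agent $i$ receives under that structure. To make the two-level communication concrete, below is a minimal sketch under stated assumptions: the tie reasoner scores pairwise ties and thresholds them into a strong-tie adjacency, and messages are simply mean-pooled observations over strong ties (intra-group) and weak ties (inter-group). All names here (`TieReasoner`, `AgentPolicy`, `step`) and the message-pooling choice are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TieReasoner(nn.Module):
    """Hypothetical centralized tie-reasoning policy: scores every ordered
    agent pair from observations and thresholds into strong/weak ties."""
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, obs):                       # obs: (n_agents, obs_dim)
        n = obs.size(0)
        pairs = torch.cat(                        # (n, n, 2 * obs_dim)
            [obs.unsqueeze(1).expand(n, n, -1),
             obs.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.scorer(pairs).squeeze(-1)   # (n, n) pairwise tie scores
        return torch.sigmoid(logits) > 0.5        # boolean strong-tie adjacency

class AgentPolicy(nn.Module):
    """Decentralized action policy conditioned on the agent's own observation
    plus an intra-group (strong-tie) and an inter-group (weak-tie) message."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, own_obs, intra_msg, inter_msg):
        x = torch.cat([own_obs, intra_msg, inter_msg], dim=-1)
        return torch.distributions.Categorical(logits=self.net(x))

def step(obs, reasoner, policies):
    """One decision step: reason about ties centrally, exchange messages
    along strong and weak ties, then act in a decentralized fashion."""
    strong = reasoner(obs).float()                # (n, n), 1 = strong tie
    weak = 1.0 - strong
    # Assumption for this sketch: messages are mean-pooled raw observations.
    intra = strong @ obs / strong.sum(-1, keepdim=True).clamp(min=1)
    inter = weak @ obs / weak.sum(-1, keepdim=True).clamp(min=1)
    return [pi(obs[i], intra[i], inter[i]).sample()
            for i, pi in enumerate(policies)]

n_agents, obs_dim, n_actions = 4, 8, 5
reasoner = TieReasoner(obs_dim)
policies = [AgentPolicy(obs_dim, n_actions) for _ in range(n_agents)]
actions = step(torch.randn(n_agents, obs_dim), reasoner, policies)
```

The thresholded adjacency is non-differentiable, which is consistent with treating tie reasoning as a separate policy trained by reinforcement and updated in alternation with the agents' action policies, as the abstract describes.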