Robust Coordination under Misaligned Communication via Power Regularization

Published: 23 Jun 2025, Last Modified: 25 Jun 2025 · CoCoMARL 2025 Poster · CC BY 4.0
Keywords: Multi-agent reinforcement learning, MARL safety, Robust coordination, Influence in multi-agent systems, Adversarial Communication, Influence regularization, Adversarial Robustness, Learning to communicate
Abstract: Effective communication in Multi-Agent Reinforcement Learning (MARL) can significantly enhance coordination and collaborative performance in complex, partially observable environments. However, reliance on communication also introduces vulnerabilities when agents are misaligned, potentially enabling adversarial interactions that exploit implicit assumptions of cooperative intent. Prior work has addressed adversarial behavior through power regularization, which controls the influence one agent exerts over another, but has largely overlooked the role of communication in these dynamics. This paper introduces Communicative Power Regularization (CPR), which extends power regularization specifically to communication channels. By explicitly quantifying and constraining agents' communicative influence during training, CPR mitigates vulnerabilities arising from misaligned or adversarial communication. Evaluations in the Grid Coverage benchmark environment demonstrate that our approach significantly enhances robustness to adversarial communication while preserving cooperative performance, offering a practical framework for secure and resilient cooperative MARL systems.
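The abstract describes quantifying and constraining communicative influence during training. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch: communicative power is approximated as the shift in a receiver's action distribution between the real message and a counterfactual null message, and that power term is subtracted from the task return with a hypothetical weight `lam`. The `toy_policy`, `communicative_power`, and `cpr_objective` names are assumptions for illustration, not the authors' API.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over action logits
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def toy_policy(obs, msg):
    # hypothetical receiver policy: logits are obs plus the incoming message
    return softmax(obs + msg)

def communicative_power(policy_fn, obs, msg, null_msg):
    """Influence of a message on the receiver, measured as the total-variation
    distance between the action distribution under the real message and under
    a counterfactual null message (one plausible power proxy, an assumption)."""
    p_msg = policy_fn(obs, msg)
    p_null = policy_fn(obs, null_msg)
    return 0.5 * np.abs(p_msg - p_null).sum()

def cpr_objective(task_return, power, lam=0.5):
    # power-regularized objective: task return minus a penalty on
    # communicative influence (lam is a hypothetical trade-off weight)
    return task_return - lam * power

obs = np.array([0.2, -0.1, 0.0])
msg = np.array([1.0, 0.0, -1.0])   # informative message
null = np.zeros(3)                 # counterfactual "no message"

power = communicative_power(toy_policy, obs, msg, null)
objective = cpr_objective(task_return=1.0, power=power)
```

Under this sketch, a sender whose messages strongly sway the receiver's policy incurs a larger penalty, so training trades raw coordination gains against susceptibility to adversarial messages.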
Submission Number: 16