Wasserstein-Barycenter Consensus for Cooperative Multi-Agent Reinforcement Learning

Published: 23 Jun 2025, Last Modified: 25 Jun 2025 · CoCoMARL 2025 Poster · CC BY 4.0
Keywords: Multi-Agent Reinforcement Learning, Optimal Transport, Wasserstein Barycenter
TL;DR: We propose a MARL consensus method that regularizes agent policies with a penalty based on their Sinkhorn divergence from a shared Wasserstein barycenter of their visitation measures.
Abstract: Cooperative multi-agent reinforcement learning (MARL) demands principled mechanisms to align heterogeneous policies while preserving the capacity for specialized behavior. We introduce a novel consensus framework that defines the team strategy as the entropy-regularized p-Wasserstein barycenter of the agents’ joint state–action visitation measures. By augmenting each agent’s policy objective with a soft penalty proportional to its Sinkhorn divergence from this barycenter, the proposed approach encourages coherent group behavior without enforcing rigid parameter sharing. We derive an algorithm that alternates between Sinkhorn-barycenter computation and policy-gradient updates, and we prove that, under standard Lipschitz and compactness assumptions, the maximal pairwise policy discrepancy contracts at a geometric rate. Empirical evaluation on a cooperative-navigation case study demonstrates that our OT-barycenter consensus outperforms an independent-learners baseline in convergence speed and final coordination success.
Submission Number: 3
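
To make the abstract's consensus penalty concrete, here is a minimal sketch of the barycenter-plus-penalty computation, assuming visitation measures discretized as histograms over a small 1-D support and using the POT library (ot). The grid size, ground cost, regularization reg, and penalty weight beta are illustrative assumptions, not the paper's settings, and the sketch omits the policy-gradient updates the penalty would be added to.

```python
# Sketch (not the authors' code): entropic Wasserstein barycenter of agents'
# visitation histograms and a debiased Sinkhorn-divergence consensus penalty,
# using the POT library (pip install pot).
import numpy as np
import ot

n_bins = 50     # discretized joint state-action support (illustrative)
n_agents = 3
reg = 0.05      # entropic regularization strength
beta = 0.1      # consensus penalty weight (illustrative)

# Squared-Euclidean ground cost over the discretized support.
support = np.linspace(0.0, 1.0, n_bins).reshape(-1, 1)
M = ot.dist(support, support)
M /= M.max()

# Toy visitation measures: one histogram per agent (columns of A).
rng = np.random.default_rng(0)
A = rng.random((n_bins, n_agents))
A /= A.sum(axis=0, keepdims=True)

# Entropic-regularized Wasserstein barycenter of the agents' measures.
bary = ot.bregman.barycenter(A, M, reg)

def sinkhorn_divergence(a, b):
    """Debiased Sinkhorn divergence S(a, b) between two histograms."""
    ab = ot.sinkhorn2(a, b, M, reg)
    aa = ot.sinkhorn2(a, a, M, reg)
    bb = ot.sinkhorn2(b, b, M, reg)
    return ab - 0.5 * (aa + bb)

# Soft consensus penalty each agent would add to its policy-gradient loss.
penalties = [beta * sinkhorn_divergence(A[:, i], bary) for i in range(n_agents)]
print("consensus penalties:", penalties)
```

In the full algorithm these two steps would alternate: recompute the barycenter from the agents' current visitation estimates, then take policy-gradient steps on each agent's return minus its penalty.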