Abstract: Relational reasoning has become an important concept in machine learning, with methods such as graph neural networks making notable progress by capturing intricate relational patterns. While relational reasoning has shown promise in single-agent reinforcement learning, its potential in the multi-agent setting remains largely uncharted. Our work bridges this gap by demonstrating the advantages of integrating deep relational learning into multi-agent reinforcement learning. We introduce an actor-critic architecture for centralized learning and decentralized execution that uses relational graph neural networks to provide a spatial inductive bias. Empirical results show improved sample efficiency and asymptotic performance over strong baselines on cooperative tasks with significant spatial complexity.
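To make the core mechanism concrete, the following is a minimal, dependency-free sketch of one relational message-passing step of the kind used in relational graph neural networks (e.g. R-GCN-style layers). This is an illustrative toy, not the paper's actual architecture: the node names, edge triples, and scalar per-relation weights (standing in for learned weight matrices) are all hypothetical.

```python
def relational_gnn_step(node_feats, edges, rel_weights, self_weight):
    """One relational message-passing step (toy R-GCN-style sketch).

    node_feats:  {node: [float, ...]} feature vector per node (e.g. per agent)
    edges:       list of (src, dst, rel) triples with a relation label
    rel_weights: {rel: float} scalar weight per relation type
                 (a stand-in for learned relation-specific weight matrices)
    self_weight: float weight on each node's own features (self-loop)
    """
    # Start from the self-loop contribution.
    out = {n: [self_weight * x for x in f] for n, f in node_feats.items()}

    # In-degree per (destination, relation) pair, for mean normalisation.
    deg = {}
    for _, dst, rel in edges:
        deg[(dst, rel)] = deg.get((dst, rel), 0) + 1

    # Aggregate neighbour features per relation type.
    for src, dst, rel in edges:
        w = rel_weights[rel] / deg[(dst, rel)]
        out[dst] = [o + w * x for o, x in zip(out[dst], node_feats[src])]
    return out
```

In a centralized critic, a few such steps over an agent-interaction graph let each node's value estimate depend on spatially related agents rather than on a flat concatenation of all observations.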