Keywords: Multi-Agent Reinforcement Learning, Group Resilience, Collaboration, Deep Reinforcement Learning
TL;DR: We show that collaboration with other agents is key to achieving group resilience: collaborating agents adapt better to environment perturbations in multi-agent reinforcement learning (MARL) settings.
Abstract: To safely operate in various dynamic scenarios, AI agents must be robust to unexpected changes in their environment. Previous work on this type of robustness, often called resilience, has focused on single-agent settings. In this work, we introduce a multi-agent variant we call group resilience and formalize this notion. We further hypothesize that collaboration with other agents is key to achieving group resilience, meaning that collaborating agents adapt better to environment perturbations in multi-agent reinforcement learning (MARL) settings. We test our hypothesis empirically by evaluating different collaboration protocols and examining their effect on group resilience. We deploy several MARL algorithms in multiple environments with varying magnitudes of perturbations. Our experiments show that all collaborative approaches lead to greater group resilience than their non-collaborative counterparts. Furthermore, our results characterize how well each of the compared collaboration methods maintains group resilience.
Submission Number: 20