The StarCraft Multi-Agent Challenges+: Learning of Sub-tasks and Environmental Benefits without Precise Reward Functions

Published: 11 Jul 2022, Last Modified: 25 Nov 2024. AI4ABM 2022 Spotlight.
Keywords: Benchmark, Multi-agent reinforcement learning
TL;DR: A multi-agent RL benchmark, SMAC$^{+}$, in which agents must implicitly learn to complete sub-tasks and exploit environmental features without precise reward functions.
Abstract: In this paper, we propose a novel benchmark called SMAC$^{+}$, in which agents must implicitly learn to complete sub-tasks or to exploit environmental features without precise reward functions. The StarCraft Multi-Agent Challenges (SMAC), recognized as a standard benchmark for Multi-Agent Reinforcement Learning (MARL), is mainly concerned with ensuring that all agents cooperatively eliminate approaching adversaries through fine-grained control guided by explicit reward functions. SMAC$^{+}$, on the other hand, probes the ability of MARL algorithms to effectively discover sub-tasks. In the offensive scenarios, agents must learn to locate opponents first and then eliminate them, while the defensive scenarios require agents to exploit topographic features, such as positioning themselves behind structures to lower the probability of being attacked by enemies. In these scenarios, MARL algorithms must learn to accomplish sub-tasks indirectly, without direct incentives. We investigate MARL algorithms on SMAC$^{+}$ and observe that recent approaches perform well in settings similar to the previous challenges but misbehave in the offensive scenarios, even when training time is significantly extended. We also find that a risk-based extra exploration approach has a positive effect on performance by promoting the completion of sub-tasks.
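The sparse, implicit reward structure described above means that agents interact with SMAC$^{+}$ scenarios through the same observation/action loop as the original SMAC benchmark. Below is a minimal sketch of that loop using the public `StarCraft2Env` API from the `smac` package, with random action selection; the assumption that SMAC$^{+}$ maps plug into this interface unchanged, and the map name `"Off_Near"`, are illustrative and not taken from the paper.

```python
import numpy as np
from smac.env import StarCraft2Env

# Hypothetical SMAC+ offensive map name; substitute an actual scenario map.
env = StarCraft2Env(map_name="Off_Near")
env_info = env.get_env_info()
n_agents = env_info["n_agents"]

env.reset()
terminated = False
episode_return = 0.0

while not terminated:
    # Each agent samples uniformly from its currently available actions.
    # In SMAC+ the reward alone does not spell out the sub-task (e.g. first
    # locating the enemy), so the exploration strategy matters.
    actions = []
    for agent_id in range(n_agents):
        avail_actions = env.get_avail_agent_actions(agent_id)
        actions.append(np.random.choice(np.nonzero(avail_actions)[0]))
    reward, terminated, info = env.step(actions)
    episode_return += reward

env.close()
print(f"Episode return: {episode_return}")
```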
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/the-starcraft-multi-agent-challenges-learning/code)