Value-based CTDE Methods in Symmetric Two-team Markov Game: from Cooperation to Team Competition

08 Oct 2022 (modified: 03 Nov 2024), Deep RL Workshop 2022
Keywords: MARL, Two-team Markov game, Competition, CTDE methods, SMAC
TL;DR: Identifying the best training scenario for a team of agents that must compete against multiple possible strategies of opposing teams.
Abstract: In this paper, we identify the best learning scenario for training a team of agents to compete against multiple possible strategies of opposing teams. We evaluate cooperative value-based methods in a mixed cooperative-competitive environment, restricting ourselves to the case of a symmetric, partially observable, two-team Markov game. We select three training methods based on the centralised training and decentralised execution (CTDE) paradigm: QMIX, MAVEN and QVMix. For each method, we consider three learning scenarios that differ in the variety of team policies encountered during training. For our experiments, we modified the StarCraft Multi-Agent Challenge environment to create competitive environments in which both teams learn and compete simultaneously. Our results suggest that training against multiple evolving strategies yields the best performance when the trained teams are evaluated against several strategies.
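The baselines named above are value-based CTDE methods, whose core idea is to factorise a joint team value from per-agent utilities during centralised training. As a minimal illustrative sketch (not the authors' code), the PyTorch snippet below implements a QMIX-style monotonic mixing network; all class names, dimensions, and defaults are assumptions chosen for clarity.

```python
# Minimal sketch of a QMIX-style monotonic mixing network (PyTorch).
# Illustrative only: names and sizes are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QMixer(nn.Module):
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks produce mixing weights conditioned on the global
        # state, which is available during centralised training only.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents) per-agent Q-values
        # state:    (batch, state_dim) global state
        bs = agent_qs.size(0)
        # abs() keeps the mixing weights non-negative, which enforces
        # monotonicity of Q_tot in each agent's individual Q-value.
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(agent_qs.view(bs, 1, self.n_agents), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(bs, 1)  # Q_tot
```

At execution time each agent acts greedily on its own Q-values, so the mixer is discarded and the policy remains fully decentralised, which is what makes such methods usable in the partially observable two-team setting studied here.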
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/value-based-ctde-methods-in-symmetric-two/code)