Abstract: This work presents a sample-efficient and effective value-based method, named SMIX(λ), for reinforcement learning in multi-agent environments (MARL) within the paradigm of centralized training with decentralized execution (CTDE), in which learning a stable and generalizable centralized value function (CVF) is crucial. To achieve this, our method carefully combines several elements: 1) removing the unrealistic centralized greedy assumption during the learning phase, 2) using the λ-return to balance the trade-off between bias and variance and to deal with the environment's non-Markovian property, and 3) adopting an experience-replay-style off-policy training scheme. Interestingly, it is revealed that there exists an inherent connection between SMIX(λ) and the previous off-policy Q(λ) approach for single-agent learning. Experiments on the StarCraft Multi-Agent Challenge (SMAC) benchmark show that the proposed SMIX(λ) algorithm outperforms several state-of-the-art MARL methods by a large margin, and that it can be used as a general tool to improve the overall performance of a CTDE-type method by enhancing the evaluation quality of its CVF. We open-source our code at: https://github.com/chaovven/SMIX.
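
As a minimal illustration of the λ-return element mentioned in the abstract, the sketch below computes λ-returns over a single trajectory by backward recursion, following the standard definition G_t^λ = r_t + γ[(1−λ)V(s_{t+1}) + λG_{t+1}^λ]. This is only a generic sketch; the function name, array layout, and NumPy usage are assumptions for illustration and are not taken from the authors' released SMIX(λ) implementation, where the value estimates would come from the centralized value function.

```python
import numpy as np

def lambda_returns(rewards, next_values, gamma=0.99, lam=0.8):
    """Backward-recursive lambda-returns for one trajectory.

    rewards:     shape [T], rewards r_0, ..., r_{T-1}
    next_values: shape [T], value estimates V(s_1), ..., V(s_T);
                 set the last entry to 0 if s_T is terminal.
    Returns an array of lambda-returns G_0^lam, ..., G_{T-1}^lam.
    """
    T = len(rewards)
    returns = np.empty(T)
    g = next_values[-1]  # bootstrap value beyond the last stored transition
    for t in reversed(range(T)):
        # G_t^lam = r_t + gamma * [(1 - lam) * V(s_{t+1}) + lam * G_{t+1}^lam]
        g = rewards[t] + gamma * ((1 - lam) * next_values[t] + lam * g)
        returns[t] = g
    return returns
```

Setting lam=0 recovers the one-step bootstrapped target (low variance, higher bias), while lam=1 recovers the Monte-Carlo return (no bias from the value estimate, higher variance), which is the bias-variance trade-off the abstract refers to.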