How much can change in a year? Revisiting Evaluation in Multi-Agent Reinforcement Learning

Published: 08 Feb 2024, Last Modified: 08 Feb 2024, Venue: XAI4DRL, License: CC BY 4.0
Confirmation: I accept the constraint that, if the paper is accepted, at least one of the authors will attend the workshop and present the work
Keywords: Multi-Agent Reinforcement Learning, Survey, Evaluation
TL;DR: An analysis of historical and recent trends in algorithmic evaluation in Multi-Agent Reinforcement Learning (MARL)
Abstract: Establishing sound experimental standards and rigour is important in any growing field of research. Deep Multi-Agent Reinforcement Learning (MARL) is one such nascent field. Although exciting progress has been made, MARL has recently come under scrutiny for replicability issues and a lack of standardised evaluation methodology, particularly in the cooperative setting. Although protocols have been proposed to help alleviate these issues, it remains important to actively monitor the health of the field. In this work, we extend the database of evaluation methodology previously published by Gorsane et al. (2022), containing meta-data on MARL publications from top-rated conferences, and compare the findings extracted from this updated database to the trends identified in their work. Our analysis shows that many of the worrying trends in performance reporting remain, including the omission of uncertainty quantification, incomplete reporting of relevant evaluation details, and a narrowing of the classes of algorithms under development. Promisingly, we do observe a trend towards more difficult scenarios in SMAC-v1 which, if continued into SMAC-v2, should encourage novel algorithmic development. Our data indicate that the MARL community needs to approach replicability more proactively to ensure trust in the field as it moves towards exciting new frontiers.
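As a concrete illustration of the uncertainty quantification the abstract notes is often omitted, the following is a minimal sketch (not from the paper) of a percentile-bootstrap confidence interval over per-seed evaluation returns; the function name `bootstrap_ci` and the seed returns are hypothetical placeholders.

```python
import numpy as np

def bootstrap_ci(returns, n_boot=10_000, ci=95, seed=None):
    """Percentile-bootstrap confidence interval for the mean per-seed return."""
    rng = np.random.default_rng(seed)
    returns = np.asarray(returns, dtype=float)
    # Resample the per-seed returns with replacement, recording each resample's mean.
    boot_means = np.array([
        rng.choice(returns, size=returns.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    alpha = (100 - ci) / 2
    lo, hi = np.percentile(boot_means, [alpha, 100 - alpha])
    return returns.mean(), lo, hi

# Hypothetical final evaluation returns from 10 independent training seeds.
seed_returns = [18.2, 17.5, 19.1, 16.8, 18.9, 17.2, 18.4, 19.5, 16.9, 18.0]
mean, lo, hi = bootstrap_ci(seed_returns)
print(f"mean return {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting an interval such as this alongside the point estimate, rather than a single mean over seeds, is the kind of practice the authors argue is too frequently omitted in MARL performance reporting.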
Submission Type: Short Paper
Submission Number: 4