AdaStop: adaptive statistical testing for sound comparisons of Deep RL agents

TMLR Paper2061 Authors

17 Jan 2024 (modified: 06 May 2024) · Decision pending for TMLR
Abstract: Recently, the scientific community has questioned the statistical reproducibility of many empirical results, especially in the field of machine learning. To address this reproducibility crisis, we propose a theoretically sound methodology for comparing the overall performance of multiple algorithms with stochastic returns. We exemplify our methodology in Deep Reinforcement Learning (Deep RL). Indeed, the performance of one execution of a Deep RL algorithm is a random variable, so several independent executions are needed to evaluate its performance accurately. When comparing algorithms with random performance, a major question is how many executions must be performed to ensure that the result of the comparison is theoretically sound. Researchers in Deep RL often use fewer than 5 independent executions to compare algorithms: we claim that this is not enough in general. Moreover, when comparing several algorithms at once, a multiple-testing procedure must be used to preserve low error guarantees. We introduce AdaStop, a new statistical test based on multiple group sequential tests. When comparing algorithms, AdaStop adapts the number of executions so as to stop as early as possible while ensuring there is enough information to distinguish, in a statistically significant way, the algorithms that perform better than the others. We prove theoretically that AdaStop has a low probability of making a (family-wise) error. Finally, we illustrate the effectiveness of AdaStop in multiple Deep RL use cases, including toy examples and challenging Mujoco environments. AdaStop is the first statistical test suited to this kind of comparison: it is both a significant contribution to statistics and a major contribution to computational studies performed in reinforcement learning and other domains.
To summarize our contribution, we introduce AdaStop, a formally grounded statistical tool to let anyone answer the practical question: ``Is my algorithm the new state-of-the-art?''
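To make the idea concrete, the following is a minimal illustrative sketch of a group sequential comparison of two agents, not the paper's algorithm: runs are collected in batches, a two-sided permutation test on the mean return is performed at each interim look, and the procedure stops as soon as it rejects. The batch size, the number of looks, and the crude Bonferroni split `alpha / max_batches` are all assumptions standing in for AdaStop's calibrated group-sequential thresholds.

```python
import random
import statistics

def permutation_pvalue(xs, ys, n_perm=2000, rng=None):
    """Two-sided permutation test on the difference of mean returns."""
    rng = rng or random.Random(0)
    observed = abs(statistics.fmean(xs) - statistics.fmean(ys))
    pooled = list(xs) + list(ys)
    n = len(xs)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.fmean(pooled[:n]) - statistics.fmean(pooled[n:]))
        if diff >= observed:
            hits += 1
    # Add-one correction keeps the p-value strictly positive.
    return (hits + 1) / (n_perm + 1)

def sequential_compare(run_a, run_b, batch=5, max_batches=6, alpha=0.05):
    """Collect executions in batches; stop as soon as an interim test rejects.

    The Bonferroni split alpha / max_batches is a simple stand-in for
    AdaStop's calibrated thresholds (an assumption, not the paper's rule).
    """
    xs, ys = [], []
    for k in range(1, max_batches + 1):
        xs += [run_a() for _ in range(batch)]
        ys += [run_b() for _ in range(batch)]
        p = permutation_pvalue(xs, ys)
        if p <= alpha / max_batches:
            return "different", k * batch  # early stop: agents distinguished
    return "no significant difference", max_batches * batch
```

For example, comparing two simulated "agents" whose returns are Gaussian with well-separated means typically stops after only a batch or two, whereas identical agents run to the full budget; `run_a` and `run_b` here are hypothetical stand-ins for one training-and-evaluation execution of each Deep RL algorithm.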
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We updated the submission in accordance with the reviewers' comments; a diff between this version and the initial one is provided in the supplementary materials.
Assigned Action Editor: ~Martha_White1
Submission Number: 2061