Keywords: reproducibility, evaluation pipeline, reinforcement learning, replication of results
TL;DR: We study the importance of reproducibility in evaluation in RL and propose an evaluation pipeline that could be standardized, as a step towards robust and reproducible research in RL.
Abstract: Reinforcement learning (RL) has recently achieved tremendous success in solving complex tasks. While careful attention is increasingly paid to reproducible research in machine learning, reproducibility in RL is often harder to achieve due to the lack of a standard evaluation method and of detailed methodology for algorithms and for comparisons with existing work. In this work, we highlight key differences between evaluation in RL and in supervised learning, and discuss specific issues that are often non-intuitive for newcomers. We study the importance of reproducibility in evaluation in RL and propose an evaluation pipeline that can be decoupled from the algorithm code. We hope such an evaluation pipeline can be standardized, as a step towards robust and reproducible research in RL.