Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control

Riashat Islam, Peter Henderson, Maziar Gomrokchi, Doina Precup

Jun 18, 2017 (modified: Jul 31, 2017) ICML 2017 RML Submission
  • Abstract: Policy gradient methods in reinforcement learning have become increasingly prevalent for state-of-the-art performance in continuous control tasks. Novel methods typically benchmark against a few key algorithms, such as deep deterministic policy gradients and trust region policy optimization. As such, it is important to present and use consistent baseline experiments. However, this can be difficult due to general variance in the algorithms, hyper-parameter tuning, and environment stochasticity. We investigate and discuss: the significance of hyper-parameters in policy gradients for continuous control, general variance in the algorithms, and reproducibility of reported results. We provide guidelines on reporting novel results as comparisons against baseline methods, so that future researchers can make informed decisions when investigating novel methods (a minimal sketch of seed-averaged reporting follows below).
  • TL;DR: On the difficulty of reproducing continuous control experiments with policy gradient algorithms
  • Keywords: Deep Reinforcement Learning, Policy Gradients, Continuous Control, Gym MuJoCo, Reproducibility
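
One of the paper's central recommendations, reporting performance aggregated over several random seeds rather than from a single run, can be illustrated with a short sketch. This is not code from the paper: run_trial is a hypothetical stand-in for a full training run (e.g., DDPG or TRPO on a Gym MuJoCo task), the seed list is illustrative, and the only real dependency is NumPy.

    import numpy as np

    def evaluate_across_seeds(run_trial, seeds=(0, 1, 2, 3, 4)):
        """Run one training configuration under several random seeds and
        summarize the resulting returns, instead of reporting a single
        (possibly lucky) run."""
        returns = np.array([run_trial(seed) for seed in seeds])
        sem = returns.std(ddof=1) / np.sqrt(len(returns))
        return {
            "mean": returns.mean(),
            "std": returns.std(ddof=1),
            "ci95": 1.96 * sem,  # normal-approximation 95% interval over seeds
        }

    def run_trial(seed):
        # Hypothetical stand-in: a real trial would train an agent
        # (e.g., DDPG or TRPO) with this seed and return its final
        # average episode return.
        rng = np.random.default_rng(seed)
        return 1000.0 + 200.0 * rng.standard_normal()

    print(evaluate_across_seeds(run_trial))

Reporting the interval alongside the mean makes it clear how much of an apparent improvement over a baseline could be explained by seed-to-seed variance alone.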
