The wisdom of the crowd: reliable deep reinforcement learning through ensembles of Q-functions

Daniel Elliott, Charles Anderson

Sep 27, 2018 — ICLR 2019 Conference Blind Submission
  • Abstract: Reinforcement learning agents learn by exploring the environment and then exploiting what they have learned. This frees human trainers from having to know the preferred action or intrinsic value of each encountered state. The cost of this freedom is that reinforcement learning is slower and less stable than supervised learning. We explore the possibility that ensemble methods can remedy these shortcomings by investigating a novel technique that harnesses the wisdom of the crowd through bagging of Q-function approximator estimates. Our results show that the proposed approach improves performance on all three tasks and with all reinforcement learning approaches attempted. We demonstrate that this is a direct result of the increased stability of the action portion of the state-action-value function used by Q-learning to select actions and by policy gradient methods to train the policy.
  • Keywords: reinforcement learning, ensembles, deep learning, neural network
  • TL;DR: Examined how a simple ensemble approach can tackle the biggest challenges in Q-learning.
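The abstract describes bagging Q-function approximator estimates but gives no implementation details. As an illustration only (the ensemble size, function-approximator form, and aggregation rule below are assumptions, not the paper's specification), the core idea of averaging several independently trained Q-estimators before selecting an action can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 ensemble members, each holding its own
# Q-estimate over 10 states and 3 actions. In a real agent each
# member would be a separately trained function approximator
# (e.g. fit on its own bootstrap sample of experience).
n_members, n_states, n_actions = 5, 10, 3
q_ensemble = rng.normal(size=(n_members, n_states, n_actions))

def bagged_q(state):
    """Bagged Q-estimate: average the members' predictions for a state."""
    return q_ensemble[:, state, :].mean(axis=0)

def greedy_action(state):
    """Select the action with the highest bagged Q-value."""
    return int(np.argmax(bagged_q(state)))
```

Averaging across members damps the noise in any single approximator's action values, which is the stability effect the abstract attributes to the ensemble.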