UCB EXPLORATION VIA Q-ENSEMBLES

15 Feb 2018 (modified: 10 Feb 2022) · ICLR 2018 Conference Blind Submission
Abstract: We show how an ensemble of $Q^*$-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well-established algorithms from the bandit setting and adapt them to the $Q$-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant gains on the Atari benchmark.
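The abstract's core idea, choosing actions by an upper confidence bound computed from an ensemble of $Q$-functions, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the exploration coefficient `lam` and the array shapes are assumptions introduced here.

```python
import numpy as np

def ucb_action(q_values: np.ndarray, lam: float = 0.1) -> int:
    """Select an action via a UCB over an ensemble of Q-estimates.

    q_values: shape (n_heads, n_actions), one row of Q-values per
        ensemble member for the current state (hypothetical layout).
    lam: assumed exploration coefficient weighting the ensemble's
        empirical standard deviation (not specified in the abstract).
    """
    mean = q_values.mean(axis=0)   # ensemble mean per action
    std = q_values.std(axis=0)     # ensemble disagreement per action
    return int(np.argmax(mean + lam * std))
```

With zero disagreement the rule reduces to greedy action selection over the ensemble mean; higher `lam` biases the agent toward actions the ensemble members disagree on.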
TL;DR: Adapting UCB exploration to ensemble Q-learning improves over prior methods such as Double DQN and A3C+ on the Atari benchmark.
Keywords: Reinforcement learning, Q-learning, ensemble method, upper confidence bound
Data: [Arcade Learning Environment](https://paperswithcode.com/dataset/arcade-learning-environment)
