Abstract: Despite significant advances in the field of deep Reinforcement Learning (RL), today's algorithms still fail to learn human-level policies consistently over a set of diverse tasks such as Atari 2600 games. We identify three key challenges that any algorithm must master to perform well on all games: processing diverse reward distributions, reasoning over long time horizons, and exploring efficiently. In this paper, we propose an algorithm that addresses each of these challenges and is able to learn human-level policies on nearly all Atari games. A new transformed Bellman operator allows our algorithm to process rewards of varying densities and scales; an auxiliary temporal consistency loss allows us to train stably with a discount factor of 0.999 (instead of 0.99), extending the effective planning horizon by an order of magnitude; and we ease the exploration problem by using human demonstrations that guide the agent towards rewarding states. When tested on a set of 42 Atari games, our algorithm exceeds the performance of an average human on 40 of them using a common set of hyperparameters.
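For reference, the transformed Bellman operator squashes targets with an invertible function before bootstrapping, so that rewards of very different scales produce targets of comparable magnitude. Below is a minimal NumPy sketch of the one-step target h(r + γ h⁻¹(max_a' Q(x', a'))); the squashing function h(z) = sign(z)(√(|z|+1) − 1) + εz and its closed-form inverse follow the paper, while the function names and the batched-array interface are illustrative.

```python
import numpy as np

EPS = 1e-2  # epsilon from the paper; keeps h strictly increasing and invertible


def h(z):
    """Squashing function h(z) = sign(z) * (sqrt(|z| + 1) - 1) + eps * z."""
    return np.sign(z) * (np.sqrt(np.abs(z) + 1.0) - 1.0) + EPS * z


def h_inv(z):
    """Closed-form inverse of h (solve the quadratic in sqrt(|x| + 1))."""
    return np.sign(z) * (
        ((np.sqrt(1.0 + 4.0 * EPS * (np.abs(z) + 1.0 + EPS)) - 1.0)
         / (2.0 * EPS)) ** 2 - 1.0
    )


def transformed_td_target(reward, gamma, next_q, done):
    """One-step target of the transformed Bellman operator:
    h(r + gamma * h^{-1}(max_a' Q(x', a'))), with no bootstrap at terminals."""
    bootstrap = (1.0 - done) * gamma * h_inv(np.max(next_q, axis=-1))
    return h(reward + bootstrap)
```

Because h compresses large returns roughly like a square root, the network regresses onto targets of modest scale even when optimizing the unclipped return with γ = 0.999.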
Keywords: Reinforcement Learning, Atari, RL, Demonstrations
TL;DR: Ape-X DQfD = Distributed (many actors + one learner + prioritized replay) DQN with demonstrations optimizing the unclipped 0.999-discounted return on Atari.
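As a rough illustration of the learner side of this setup, the sketch below assembles each training batch from a mix of actor-generated transitions and permanently retained human demonstrations. This is a simplified sketch only: the fixed demo fraction is a placeholder value, uniform sampling stands in for the prioritized replay the TL;DR names, and all identifiers are hypothetical.

```python
import random

# Placeholder mixing ratio, not a value from the paper.
DEMO_FRACTION = 0.25


def sample_batch(agent_replay, demo_replay, batch_size=32):
    """Draw a batch mixing demonstration and actor transitions.

    Uniform sampling is used here for brevity; the actual agent samples
    both buffers with prioritized replay.
    """
    n_demo = min(int(DEMO_FRACTION * batch_size), len(demo_replay))
    batch = random.sample(demo_replay, n_demo)
    batch += random.sample(agent_replay, batch_size - n_demo)
    random.shuffle(batch)
    return batch
```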
Data: [Arcade Learning Environment](https://paperswithcode.com/dataset/arcade-learning-environment)