Abstract: Eligibility traces in reinforcement learning act as a bias-variance trade-off and can often speed up training by propagating knowledge back over many time-steps in a single update. We investigate the use of eligibility traces in combination with recurrent networks in the Atari domain. We illustrate the benefits of both recurrent nets and eligibility traces in some Atari games, and also highlight the importance of the optimization method used in training.
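As a reminder of the mechanism the abstract refers to, here is a minimal sketch of tabular TD(λ) with accumulating eligibility traces; the 5-state chain environment, step size, and decay parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical 5-state chain: start at state 0, reward 1 on reaching state 4.
n_states = 5
alpha, gamma, lam = 0.1, 0.99, 0.9   # step size, discount, trace decay

V = np.zeros(n_states)   # state-value estimates
e = np.zeros(n_states)   # eligibility traces

rng = np.random.default_rng(0)
for episode in range(200):
    e[:] = 0.0
    s = 0
    while s != n_states - 1:
        # Move right with prob. 0.9, otherwise left (toy dynamics).
        s_next = min(s + 1, n_states - 1) if rng.random() < 0.9 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # TD error; terminal state's value is treated as 0.
        delta = r + gamma * V[s_next] * (s_next != n_states - 1) - V[s]
        e[s] += 1.0              # accumulate trace for the visited state
        V += alpha * delta * e   # one TD error updates every eligible state
        e *= gamma * lam         # decay all traces each step
        s = s_next
```

The trace vector `e` is what lets a single TD error update many past states at once, which is the speed-up the abstract alludes to; λ interpolates between one-step TD (λ=0, low variance, higher bias) and Monte Carlo returns (λ=1).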
TL;DR: Analyze the effects of using eligibility traces and different optimization methods in Deep Recurrent Q-Networks
Conflicts: cs.mcgill.ca, umontreal.ca
Keywords: Reinforcement Learning, Deep learning