Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening

Frank S. He, Yang Liu, Alexander G. Schwing, Jian Peng

Nov 04, 2016 (modified: Mar 07, 2017) · ICLR 2017 conference submission
  • Abstract: We propose a novel training algorithm for reinforcement learning which combines the strengths of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy. (An illustrative sketch of the penalized objective appears after this list.)
  • TL;DR: We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation.
  • Keywords: Reinforcement Learning, Optimization, Games
  • Conflicts: toronto.edu, illinois.edu
  • Authorids: frankheshibi@gmail.com, liu301@illinois.edu, aschwing@illinois.edu, jianpeng@illinois.edu
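The abstract names the technique but does not spell out the mechanism, so here is a minimal sketch of the idea as described in the paper: discounted future rewards give lower bounds on Q*(s_j, a_j), past rewards give upper bounds, and violations of the tightest bounds are added to the standard DQN squared TD error as quadratic penalties. This is a hedged illustration, not the authors' implementation; all names (`lower_bounds`, `q_next_max`, `LAMBDA`, the toy trajectory at the bottom) are assumptions made for this sketch.

```python
# Hedged sketch of the "optimality tightening" penalties (He et al., ICLR 2017).
# Names, shapes, and constants below are illustrative assumptions, not the
# authors' code.

GAMMA = 0.99   # discount factor
LAMBDA = 4.0   # penalty weight; assumed fixed, as a constant coefficient
K = 4          # number of steps used when forming the bounds

def lower_bounds(rewards, q_next_max, j, k_max=K, gamma=GAMMA):
    """L^k_j = sum_{i=0}^{k} gamma^i r_{j+i} + gamma^{k+1} max_a Q^-(s_{j+k+1}, a).

    rewards:    rewards r_0 ... r_{T-1} along one trajectory
    q_next_max: q_next_max[t] = max_a Q^-(s_{t+1}, a) from the target network
    """
    bounds = []
    for k in range(k_max):
        if j + k + 1 >= len(rewards):  # trajectory too short for this k
            break
        ret = sum(gamma ** i * rewards[j + i] for i in range(k + 1))
        bounds.append(ret + gamma ** (k + 1) * q_next_max[j + k])
    return bounds

def upper_bounds(rewards, q_taken, j, k_max=K, gamma=GAMMA):
    """U^k_j = gamma^{-(k+1)} * (Q^-(s_{j-k-1}, a_{j-k-1})
                                 - sum_{i=0}^{k} gamma^i r_{j-k-1+i}).

    q_taken: q_taken[t] = Q^-(s_t, a_t) for the action actually taken.
    """
    bounds = []
    for k in range(k_max):
        t = j - k - 1
        if t < 0:  # no past transitions left for this k
            break
        ret = sum(gamma ** i * rewards[t + i] for i in range(k + 1))
        bounds.append((q_taken[t] - ret) / gamma ** (k + 1))
    return bounds

def tightened_loss(q_pred, td_target, l_bounds, u_bounds, lam=LAMBDA):
    """Squared TD error plus quadratic penalties whenever the predicted
    Q-value falls below the tightest lower bound or above the tightest
    upper bound."""
    loss = (q_pred - td_target) ** 2
    if l_bounds:
        loss += lam * max(max(l_bounds) - q_pred, 0.0) ** 2
    if u_bounds:
        loss += lam * max(q_pred - min(u_bounds), 0.0) ** 2
    return loss

# Toy usage on a length-6 reward trajectory with made-up target-network values.
rewards = [1.0, 0.0, 0.0, 1.0, 0.0, 1.0]
q_next_max = [0.5, 0.6, 0.4, 0.7, 0.3, 0.2]  # max_a Q^-(s_{t+1}, a)
q_taken = [0.9, 0.8, 0.7, 0.9, 0.6, 0.5]     # Q^-(s_t, a_t)
j = 3
L = lower_bounds(rewards, q_next_max, j)
U = upper_bounds(rewards, q_taken, j)
print(tightened_loss(q_pred=0.2, td_target=1.0, l_bounds=L, u_bounds=U))
```

In the paper, the bounds are computed with the target network, so they stay fixed between target updates and the penalties can be evaluated per sampled transition; everything here is scalar Python purely to show the shape of the objective and how violated bounds propagate reward information faster than the one-step TD target alone.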
