Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening

Published: 06 Feb 2017, Last Modified: 23 Mar 2025 · ICLR 2017 Poster
Abstract: We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.
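The abstract's core idea, tightening optimality, can be sketched as follows: for each transition, multi-step discounted returns from the future give lower bounds on the optimal action value, and backed-up values from the past give upper bounds; the one-step Q-learning loss is then augmented with quadratic penalties whenever the current estimate violates those bounds. This is a minimal illustrative sketch, not the authors' code: all function names, the penalty form, and the toy numbers below are our own assumptions.

```python
# Illustrative sketch (not the paper's implementation) of optimality
# tightening: penalize Q(s_j, a_j) when it falls below lower bounds built
# from future rewards, or above upper bounds built from past rewards.

def lower_bounds(rewards, boot_values, j, K, gamma):
    """L_{j,k} = sum_{i=0..k} gamma^i * r_{j+i} + gamma^(k+1) * max_a Q(s_{j+k+1}, a).

    `boot_values[t]` stands in for max_a Q(s_t, a) (hypothetical helper input).
    """
    bounds = []
    for k in range(K):
        if j + k + 1 >= len(boot_values):
            break  # trajectory too short for a k-step bound
        ret = sum(gamma**i * rewards[j + i] for i in range(k + 1))
        bounds.append(ret + gamma**(k + 1) * boot_values[j + k + 1])
    return bounds

def upper_bounds(rewards, taken_values, j, K, gamma):
    """U_{j,k} = gamma^-(k+1) * (Q(s_{j-k-1}, a_{j-k-1}) - sum_{i=0..k} gamma^i * r_{j-k-1+i}).

    `taken_values[t]` stands in for Q(s_t, a_t) along the stored trajectory.
    """
    bounds = []
    for k in range(K):
        m = j - k - 1
        if m < 0:
            break  # no past transitions available
        ret = sum(gamma**i * rewards[m + i] for i in range(k + 1))
        bounds.append((taken_values[m] - ret) / gamma**(k + 1))
    return bounds

def tightened_loss(q, one_step_target, lows, highs, lam):
    """One-step TD loss plus quadratic penalties for bound violations."""
    L = max(lows) if lows else float("-inf")   # tightest lower bound
    U = min(highs) if highs else float("inf")  # tightest upper bound
    penalty = max(L - q, 0.0)**2 + max(q - U, 0.0)**2
    return (one_step_target - q)**2 + lam * penalty
```

Because each violated bound adds gradient signal from rewards several steps away, reward information propagates faster than through one-step backups alone, which is the mechanism the abstract credits for the reduced training time.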
TL;DR: We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation.
Keywords: Reinforcement Learning, Optimization, Games
Conflicts: toronto.edu, illinois.edu
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/learning-to-play-in-a-day-faster-deep/code) (CatalyzeX)