A Theoretical Analysis of Deep Q-Learning

L4DC 2020, 08 Jun 2020 (modified: 05 May 2023)
Abstract: Despite the great empirical success of deep reinforcement learning, its theoretical foundation is less well understood. In this work, we make the first attempt to understand the deep Q-network (DQN) algorithm (Mnih et al., 2015) theoretically, from both algorithmic and statistical perspectives. Specifically, we focus on the fitted Q-iteration (FQI) algorithm with deep neural networks, a slight simplification of DQN that captures the experience replay and target network techniques used in DQN. Under mild assumptions, we establish the algorithmic and statistical rates of convergence for the action-value functions of the policy sequence produced by FQI. In particular, the statistical error characterizes the bias and variance that arise from approximating the action-value function with deep neural networks, while the algorithmic error converges to zero at a geometric rate. As a byproduct, our analysis provides justification for the experience replay and target network techniques, which are crucial to the empirical success of DQN. Furthermore, as a simple extension of DQN, we propose the Minimax-DQN algorithm for two-player zero-sum Markov games, which is deferred to the appendix due to space limitations.
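To make the analyzed algorithm concrete, the following is a minimal sketch of neural fitted Q-iteration with a frozen target network and sampling from a replay buffer, the simplification of DQN that the paper studies. The PyTorch implementation, network architecture, and all hyperparameters below are illustrative assumptions for exposition, not code or settings from the paper.

```python
# Sketch of neural fitted Q-iteration (FQI) with a target network and
# experience replay. All architecture/hyperparameter choices are assumptions.
import copy
import random

import torch
import torch.nn as nn


def fitted_q_iteration(replay_buffer, num_actions, obs_dim,
                       num_iterations=10, fit_steps=200,
                       batch_size=64, gamma=0.99):
    """Each outer iteration freezes a target network and fits a fresh
    least-squares regression to the Bellman targets (the target-network
    trick), using minibatches drawn from a fixed buffer of transitions
    (the experience-replay trick)."""
    q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                          nn.Linear(64, num_actions))
    for _ in range(num_iterations):
        target_net = copy.deepcopy(q_net)  # frozen target network for this iteration
        optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
        for _ in range(fit_steps):
            # Sample a minibatch of (s, a, r, s', done) transitions from the buffer.
            batch = random.sample(replay_buffer, batch_size)
            s, a, r, s2, done = zip(*batch)
            s = torch.tensor(s, dtype=torch.float32)
            a = torch.tensor(a, dtype=torch.int64)
            r = torch.tensor(r, dtype=torch.float32)
            s2 = torch.tensor(s2, dtype=torch.float32)
            done = torch.tensor(done, dtype=torch.float32)
            with torch.no_grad():
                # Bellman targets computed with the frozen network.
                y = r + gamma * (1.0 - done) * target_net(s2).max(dim=1).values
            # Regress current Q-values onto the fixed targets.
            q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return q_net


# Usage on a toy buffer of random transitions (purely illustrative).
buffer = [([random.random(), random.random()], random.randrange(2),
           random.random(), [random.random(), random.random()], False)
          for _ in range(1000)]
q_net = fitted_q_iteration(buffer, num_actions=2, obs_dim=2)
```

The outer loop corresponds to the iterative policy sequence analyzed in the paper: the algorithmic error decays geometrically in the number of outer iterations, while the statistical error reflects how well the neural network regression approximates the Bellman targets at each step.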
