Finite-Time Analysis of Adaptive Temporal Difference Learning with Deep Neural Networks

Published: 31 Oct 2022, 18:00, Last Modified: 03 Jan 2023, 05:53
NeurIPS 2022 Accept
Readers: Everyone
Keywords: Temporal Difference Learning, Adaptivity, DNN Approximation, MDP, Finite-Time Analysis
TL;DR: We investigate the convergence of adaptive TD learning with DNN approximation and explain why the adaptive scheme can accelerate TD in the DNN setting.
Abstract: Temporal difference (TD) learning with function approximation (linear functions or neural networks) has achieved remarkable empirical success, giving impetus to the development of finite-time analysis. As an accelerated version of TD, adaptive TD has been proposed and proved to enjoy finite-time convergence under linear function approximation. Existing numerical results have demonstrated the superiority of adaptive algorithms over vanilla ones. Nevertheless, the performance guarantee of adaptive TD with neural network approximation remains largely unknown. This paper establishes a finite-time analysis for adaptive TD with multi-layer ReLU network approximation whose samples are generated from a Markov decision process. Our theory shows that if the width of the deep neural network is large enough, adaptive TD with neural network approximation can find the (optimal) value function with high probability under the same iteration complexity as TD in general cases. Furthermore, we show that adaptive TD with neural network approximation, with the same width and search area, achieves theoretical acceleration when the stochastic semi-gradients decay fast.
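To make the setting concrete, here is a minimal sketch of adaptive TD with a (shallow) ReLU network, not the paper's exact algorithm: semi-gradient TD(0) on a toy random-walk MDP, with an AdaGrad-style per-coordinate adaptive step size in place of a fixed learning rate. The MDP, network width, and step-size rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5-state random walk (hypothetical example): states 0..4,
# move left/right uniformly, reward 1 on reaching terminal state 4.
N_STATES, GAMMA = 5, 0.9

def step(s):
    s2 = min(max(s + rng.choice([-1, 1]), 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def features(s):
    x = np.zeros(N_STATES)
    x[s] = 1.0
    return x

# Two-layer ReLU value network: V(s) = w2 @ relu(W1 @ phi(s))
H = 32  # width (illustrative; the paper's theory needs it "large enough")
W1 = rng.normal(0, 1 / np.sqrt(N_STATES), (H, N_STATES))
w2 = rng.normal(0, 1 / np.sqrt(H), H)

# AdaGrad-style accumulators of squared semi-gradients
G1, g2 = np.zeros_like(W1), np.zeros_like(w2)
lr, eps = 0.5, 1e-8

def value(s):
    return w2 @ np.maximum(W1 @ features(s), 0.0)

s = 0
for _ in range(20000):
    s2, r = step(s)
    done = (s2 == N_STATES - 1)
    h = np.maximum(W1 @ features(s), 0.0)
    # Semi-gradient TD error: the bootstrap target is held fixed.
    target = r + (0.0 if done else GAMMA * value(s2))
    delta = target - w2 @ h
    # Gradients of V(s) with respect to the parameters.
    grad_w2 = h
    grad_W1 = np.outer(w2 * (h > 0), features(s))
    # Adaptive update: step sizes shrink per-coordinate with
    # accumulated squared semi-gradients (AdaGrad-like).
    G1 += (delta * grad_W1) ** 2
    g2 += (delta * grad_w2) ** 2
    W1 += lr * delta * grad_W1 / (np.sqrt(G1) + eps)
    w2 += lr * delta * grad_w2 / (np.sqrt(g2) + eps)
    s = 0 if done else s2  # restart the episode at the left end

print([round(value(s), 2) for s in range(N_STATES - 1)])
```

The point of the adaptive scheme is that coordinates whose semi-gradients decay fast retain larger effective step sizes, which is the regime where the paper proves theoretical acceleration over plain TD.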
Supplementary Material: pdf