A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning

Published: 31 Oct 2022, Last Modified: 10 Jan 2023
Venue: NeurIPS 2022 (Accept)
Keywords: reinforcement learning, temporal-difference learning, non-parametric, kernel methods, convergence, policy evaluation
Abstract: Temporal-difference learning is a popular algorithm for policy evaluation. In this paper, we study the convergence of the regularized non-parametric TD(0) algorithm, in both the independent and Markovian observation settings. In particular, when TD is performed in a universal reproducing kernel Hilbert space (RKHS), we prove convergence of the averaged iterates to the optimal value function, even when it does not belong to the RKHS. We provide explicit convergence rates that depend on a source condition relating the regularity of the optimal value function to the RKHS. We illustrate this convergence numerically on a simple continuous-state Markov reward process.
TL;DR: Temporal-difference learning in a universal RKHS converges to the value function of the evaluated policy.
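
To make the setup concrete, here is a minimal sketch (not the authors' code) of what regularized non-parametric TD(0) with iterate averaging can look like: the value estimate is kept as a kernel expansion in an RKHS, each step applies a semi-gradient TD update shrunk by a Tikhonov regularizer, and the averaged iterate is maintained alongside. The Gaussian kernel, the AR(1)-type continuous-state Markov reward process, the reward function, and all constants below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

GAMMA = 0.9        # discount factor (assumed)
LAMBDA = 1e-3      # Tikhonov regularization strength (assumed)
BANDWIDTH = 0.3    # Gaussian-kernel bandwidth (assumed)
N_STEPS = 2000


def kernel(x, centers):
    """Gaussian kernel k(x, c) evaluated against an array of centers."""
    return np.exp(-((x - centers) ** 2) / (2.0 * BANDWIDTH ** 2))


def evaluate(x, centers, coeffs):
    """Evaluate V(x) = sum_i coeffs[i] * k(centers[i], x)."""
    if len(centers) == 0:
        return 0.0
    return float(coeffs @ kernel(x, np.asarray(centers)))


def step_mrp(x):
    """One transition of a toy AR(1)-style Markov reward process on [-1, 1] (hypothetical)."""
    x_next = np.clip(0.8 * x + 0.2 * rng.standard_normal(), -1.0, 1.0)
    reward = np.cos(np.pi * x)  # arbitrary smooth reward, for illustration only
    return x_next, reward


centers = []                 # kernel centers x_t
coeffs = np.zeros(0)         # coefficients of the current iterate V_t
avg_coeffs = np.zeros(0)     # coefficients of the averaged iterate

x = 0.0
for t in range(N_STEPS):
    eta = 1.0 / np.sqrt(t + 1)  # decreasing step size (assumed schedule)
    x_next, r = step_mrp(x)

    # TD error under the current iterate.
    delta = r + GAMMA * evaluate(x_next, centers, coeffs) - evaluate(x, centers, coeffs)

    # Regularized functional TD(0) update in the RKHS:
    #   V_{t+1} = (1 - eta * lambda) V_t + eta * delta * k(x_t, .)
    coeffs = (1.0 - eta * LAMBDA) * coeffs
    centers.append(x)
    coeffs = np.append(coeffs, eta * delta)

    # Running (Polyak) average of the iterates; the previous average is
    # padded with a zero coefficient for the newly added center.
    avg_coeffs = np.append(avg_coeffs, 0.0)
    avg_coeffs += (coeffs - avg_coeffs) / (t + 1)

    x = x_next

# Averaged value-function estimate on a grid of states.
grid = np.linspace(-1.0, 1.0, 11)
v_hat = [evaluate(s, centers, avg_coeffs) for s in grid]
print(np.round(v_hat, 3))
```

This is only a sketch of the recursion under the assumptions stated above; the paper's analysis concerns the convergence rate of the averaged iterate to the true value function under a source condition, in both i.i.d. and Markovian sampling.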