Closing the gap between SVRG and TD-SVRG with Gradient Splitting

Published: 17 Jul 2025, Last Modified: 06 Sept 2025 · EWRL 2025 Poster · CC BY 4.0
Keywords: Theory of Reinforcement Learning, TD Convergence Analysis, Variance Reduction Techniques
TL;DR: We use gradient splitting to improve convergence of TD with variance reduction, matching the convergence rate of SVRG in the convex setting.
Abstract: Temporal difference (TD) learning is a policy evaluation method in reinforcement learning whose performance can be enhanced by variance reduction techniques. Recently, multiple works have sought to fuse TD learning with the Stochastic Variance Reduced Gradient (SVRG) method to achieve a geometric rate of convergence. However, the resulting convergence rate is significantly weaker than what SVRG achieves in the setting of convex optimization. In this work we utilize a recent interpretation of TD learning as the splitting of the gradient of an appropriately chosen function, thus simplifying the algorithm that fuses TD with SVRG. Our main result is a geometric convergence bound with a predetermined learning rate of $1/8$, which is identical to the convergence bound available for SVRG in the convex setting. Our theoretical findings are supported by a set of experiments.
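To illustrate the kind of update rule the abstract refers to, below is a minimal sketch of an SVRG-style variance-reduced TD(0) loop on a fixed batch of transitions. It is not the authors' exact algorithm; the feature map `phi`, the `transitions` dataset of `(s, r, s_next)` tuples, and the epoch/step counts are illustrative assumptions. Only the constant learning rate of 1/8 comes from the abstract.

```python
# Sketch of a TD-SVRG-style update loop (illustrative, not the paper's exact algorithm).
import numpy as np

def td_update(theta, s, r, s_next, phi, gamma):
    """Standard TD(0) semi-gradient direction for a single transition."""
    x, x_next = phi(s), phi(s_next)
    delta = r + gamma * theta @ x_next - theta @ x  # TD error
    return delta * x

def td_svrg(transitions, phi, gamma, num_epochs=50, inner_steps=1000, lr=1 / 8):
    """Variance-reduced TD(0) on a fixed batch of (s, r, s_next) transitions."""
    d = phi(transitions[0][0]).shape[0]
    theta = np.zeros(d)
    for _ in range(num_epochs):
        # Anchor point and full-batch mean TD update, recomputed once per epoch.
        theta_tilde = theta.copy()
        full_update = np.mean(
            [td_update(theta_tilde, s, r, s2, phi, gamma) for s, r, s2 in transitions],
            axis=0,
        )
        for _ in range(inner_steps):
            s, r, s2 = transitions[np.random.randint(len(transitions))]
            # SVRG correction: stochastic update minus its value at the anchor,
            # plus the full-batch mean, which reduces the variance of the step.
            g = (td_update(theta, s, r, s2, phi, gamma)
                 - td_update(theta_tilde, s, r, s2, phi, gamma)
                 + full_update)
            theta += lr * g
    return theta
```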
Confirmation: I understand that authors of each paper submitted to EWRL may be asked to review 2-3 other submissions to EWRL.
Serve As Reviewer: ~Arsenii_Mustafin1
Track: Fast Track: published work
Publication Link: https://openreview.net/forum?id=dixU4fozPQ
Submission Number: 75