The Pitfalls of Regularization in Off-Policy TD Learning

Published: 31 Oct 2022, Last Modified: 11 Jan 2023 · NeurIPS 2022 Accept
Keywords: regularization, ridge, td, rl, reinforcement learning, theory
TL;DR: Regularization works counterintuitively in temporal difference learning; it may not work at all, and can even increase error asymptotically.
Abstract: Temporal Difference (TD) learning is ubiquitous in reinforcement learning, where it is often combined with off-policy sampling and function approximation. Unfortunately, learning with this combination (known as the deadly triad) exhibits instability and unbounded error. To account for this, modern reinforcement learning methods often implicitly (or sometimes explicitly) assume that regularization is sufficient to mitigate the problem in practice; indeed, the standard deadly triad examples from the literature can be "fixed" via proper regularization. In this paper, we introduce a series of new counterexamples to show that the instability and unbounded error of TD methods is not solved by regularization. We demonstrate that, in the off-policy setting with linear function approximation, TD methods can fail to learn a non-trivial value function under any amount of regularization; we further show that regularization can induce divergence under common conditions; and we show that one of the most promising methods to mitigate this divergence (Emphatic TD algorithms) may also diverge under regularization. We further demonstrate such divergence when using neural networks as function approximators. Thus, we argue that the role of regularization in TD methods needs to be reconsidered, given that it is insufficient to prevent divergence and may itself introduce instability. There needs to be much more care in the practical and theoretical application of regularization to reinforcement learning methods.
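The flavor of the problem the abstract describes can be seen in a minimal sketch below. This is the classic two-state "theta → 2·theta" off-policy counterexample in the style of Tsitsiklis and Van Roy, not any of the paper's new counterexamples; the function name `td_ridge` and all parameter values (step size, discount, ridge strengths) are illustrative choices, not taken from the paper. With zero rewards, linear features phi(s1)=1 and phi(s2)=2, and a behavior distribution concentrated on s1, the expected ridge-regularized off-policy TD(0) update on the single weight theta is theta ← theta + α((2γ − 1)θ − λθ), so any λ small enough to leave 2γ − 1 − λ > 0 still diverges, while a λ large enough to prevent divergence drives theta to the trivial value function 0:

```python
# Classic two-state off-policy TD(0) divergence sketch (Tsitsiklis & Van Roy
# style), with an added ridge (L2) penalty of strength lam.
# Features: phi(s1) = 1, phi(s2) = 2; rewards are zero; discount gamma.
# Expected update under a behavior distribution concentrated on s1:
#   theta <- theta + alpha * ((2*gamma - 1)*theta - lam*theta)

def td_ridge(theta0, alpha, gamma, lam, steps):
    """Iterate the expected ridge-regularized off-policy TD(0) update."""
    theta = theta0
    for _ in range(steps):
        td_error = (2.0 * gamma - 1.0) * theta  # r + gamma*2*theta - theta, r = 0
        theta += alpha * (td_error - lam * theta)
    return theta

gamma, alpha, steps = 0.99, 0.1, 200
no_reg    = td_ridge(1.0, alpha, gamma, lam=0.0, steps=steps)  # diverges
small_reg = td_ridge(1.0, alpha, gamma, lam=0.1, steps=steps)  # still diverges
big_reg   = td_ridge(1.0, alpha, gamma, lam=1.5, steps=steps)  # collapses toward 0

print(no_reg, small_reg, big_reg)
```

Each iterate is multiplied by the constant factor 1 + α(2γ − 1 − λ), which makes the dichotomy explicit: the sign of 2γ − 1 − λ alone decides between unbounded growth and shrinkage to zero, with no regularization strength yielding a useful non-trivial fixed point in this example.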
Supplementary Material: zip