A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning

08 Oct 2022 (modified: 03 Nov 2024) · Deep RL Workshop 2022
Keywords: reinforcement learning, regularization, one-step RL, theory
TL;DR: Critic regularization and one-step RL can produce the same policy, under certain assumptions.
Abstract: As with any machine learning problem with limited data, effective offline RL algorithms require careful regularization to avoid overfitting. One-step methods perform regularization by doing just a single step of policy improvement, while critic regularization methods do many steps of policy improvement with a regularized objective. These methods appear distinct. One-step methods, such as advantage-weighted regression and conditional behavioral cloning, are simple and stable. Critic regularization is more challenging to implement correctly and typically requires more compute, but it has appealing lower-bound guarantees. Empirically, prior work alternates between claiming better results with one-step RL and with critic regularization. In this paper, we draw a close connection between these methods: applying a multi-step critic regularization method with a large regularization coefficient yields the same policy as one-step RL. While practical implementations violate our assumptions and critic regularization is typically applied with small regularization coefficients, our experiments nevertheless show that our analysis makes accurate, testable predictions about practical offline RL methods (CQL and one-step RL) with commonly used hyperparameters.
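To make the distinction the abstract draws concrete, here is a minimal sketch (not the paper's code) contrasting a one-step, advantage-weighted-regression-style policy update with a CQL-style critic-regularized update, where `alpha` plays the role of the regularization coefficient discussed above. All function names, shapes, and hyperparameters are hypothetical, and the loss forms are simplified for illustration.

```python
# Illustrative sketch only: one-step AWR-style policy loss vs. a CQL-style
# critic-regularized loss. Names and hyperparameters are hypothetical.
import torch
import torch.nn.functional as F


def one_step_awr_loss(policy_logits, actions, advantages, temperature=1.0):
    """One-step RL: a single policy-improvement step via advantage-weighted
    behavioral cloning against a fixed behavior-policy critic's advantages."""
    # Exponentiated advantages as regression weights, clipped for stability.
    weights = torch.exp(advantages / temperature).clamp(max=20.0)
    log_probs = F.log_softmax(policy_logits, dim=-1)
    chosen_log_probs = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    return -(weights.detach() * chosen_log_probs).mean()


def cql_style_critic_loss(q_values, actions, td_targets, alpha=1.0):
    """Critic regularization: a standard TD loss plus a conservative penalty
    that pushes down Q-values over all actions while pushing up Q-values of
    dataset actions. Larger `alpha` means stronger regularization."""
    chosen_q = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    td_loss = F.mse_loss(chosen_q, td_targets)
    # logsumexp over actions is a soft maximum over Q(s, a); subtracting the
    # dataset action's Q-value gives the pessimistic penalty term.
    conservative_penalty = (torch.logsumexp(q_values, dim=1) - chosen_q).mean()
    return td_loss + alpha * conservative_penalty


if __name__ == "__main__":
    torch.manual_seed(0)
    batch, num_actions = 32, 4
    policy_logits = torch.randn(batch, num_actions, requires_grad=True)
    q_values = torch.randn(batch, num_actions, requires_grad=True)
    actions = torch.randint(num_actions, (batch,))
    advantages = torch.randn(batch)
    td_targets = torch.randn(batch)
    print("one-step AWR loss:", one_step_awr_loss(policy_logits, actions, advantages).item())
    print("CQL-style critic loss:", cql_style_critic_loss(q_values, actions, td_targets).item())
```

The paper's claim, loosely restated in these terms, is that running many policy-improvement steps against a critic trained with a sufficiently large `alpha` recovers the same policy as the single weighted-regression step above, under the paper's assumptions.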
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/a-connection-between-one-step-regularization/code)