Effective Offline RL Needs Going Beyond Pessimism: Representations and Distributional Shift

28 May 2022 (modified: 05 May 2023) · DARL 2022
Keywords: Offline reinforcement learning, representation learning
TL;DR: Effective Offline RL needs representational regularization
Abstract: Standard off-policy reinforcement learning (RL) methods based on temporal difference (TD) learning generally fail to learn good policies when applied to static offline datasets. Conventionally, this is attributed to distribution shift, where the Bellman backup queries high-value out-of-distribution (OOD) actions for the next time step, which then leads to systematic overestimation. However, this explanation is incomplete: conservative offline RL methods that directly address overestimation still suffer from stability problems in practice. This suggests that although OOD actions may account for part of the challenge, the difficulties with TD learning in the offline setting are also deeply connected to other aspects, such as the quality of the representations learned by the function approximator. In this work, we show that merely imposing pessimism is not sufficient for good performance, and demonstrate empirically that regularizing representations accounts for a large part of the improvement observed in modern offline RL methods. Building on this insight, we identify concrete metrics that enable effective diagnosis of the quality of the learned representation and adequately predict the performance of the underlying method. Finally, we show that a simple approach for handling representations, without changing any other aspect of conservative offline RL algorithms, can lead to better performance on several offline RL problems.