Offline Reinforcement Learning with Closed-Form Policy Improvement Operators

05 Oct 2022 (modified: 17 Nov 2024), Offline RL Workshop, NeurIPS 2022
Keywords: Offline Reinforcement Learning
TL;DR: We propose a learning-free policy improvement operator and model the behavior policy as a Gaussian mixture.
Abstract: Behavior-constrained policy optimization has been demonstrated to be a successful paradigm for tackling offline reinforcement learning. By exploiting historical transitions, a policy is trained to maximize a learned value function while being constrained by the behavior policy to avoid a significant distributional shift. In this paper, we propose closed-form policy improvement (CFPI) operators. We make the novel observation that the behavior constraint naturally motivates the use of a first-order Taylor approximation, leading to a linear approximation of the policy objective. Additionally, as practical datasets are usually collected by heterogeneous policies, we model the behavior policies as a Gaussian mixture and overcome the induced optimization difficulties by leveraging a lower bound of the LogSumExp together with Jensen's inequality, giving rise to a CFPI operator. We instantiate offline RL algorithms with our novel operators and empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark.
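As a rough illustration (not the authors' code), the sketch below shows how linearizing the learned value function around the behavior mean yields a closed-form update for the simplest case of a single Gaussian behavior policy N(mu, Sigma) under a Mahalanobis-ball trust region of size delta. The Gaussian-mixture case, handled in the paper via the LogSumExp lower bound and Jensen's inequality, is not covered here; all names (`cfpi_gaussian`, `grad_q`, `delta`) are hypothetical.

```python
import numpy as np

# Sketch: one-step closed-form policy improvement for a single Gaussian
# behavior policy N(mu, Sigma). Linearizing Q(s, a) around a = mu gives
#   maximize  g^T (a - mu)   s.t.  (a - mu)^T Sigma^{-1} (a - mu) <= 2 * delta,
# whose maximizer is  a* = mu + sqrt(2*delta) * Sigma g / sqrt(g^T Sigma g).

def cfpi_gaussian(mu, sigma, grad_q, delta):
    """Closed-form improved action under a Mahalanobis trust region."""
    direction = sigma @ grad_q
    scale = np.sqrt(2.0 * delta) / np.sqrt(grad_q @ sigma @ grad_q + 1e-12)
    return mu + scale * direction

# Toy usage: behavior mean, diagonal covariance, and a stand-in Q-gradient
# (in practice, grad_a Q(s, a) evaluated at a = mu from a learned critic).
mu = np.array([0.1, -0.2])
sigma = np.diag([0.04, 0.09])
grad_q = np.array([1.0, 0.5])
print(cfpi_gaussian(mu, sigma, grad_q, delta=0.5))
```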
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/offline-reinforcement-learning-with-closed/code) (via CatalyzeX)