Offline Reinforcement Learning with Pseudometric Learning

Anonymous

Published: 15 Jun 2022, Last Modified: 22 Oct 2023, SSL-RL 2021 Poster
Keywords: Offline Reinforcement Learning, Pseudometric, Bisimulation Metrics
TL;DR: A new offline RL method that learns a pseudometric on the offline dataset and uses it to learn a policy that remains close to the support of the dataset.
Abstract: Offline Reinforcement Learning methods seek to learn a policy from logged transitions of an environment, without any further interaction. In the presence of function approximation, and under the assumption of limited coverage of the environment's state-action space, it is necessary to constrain the policy to visit state-action pairs close to the support of the logged transitions. In this work, we propose an iterative procedure to learn a pseudometric from logged transitions, and use it to define this notion of closeness. We show its convergence and extend it to the function approximation setting. We then use this pseudometric to define a new lookup-based bonus in an actor-critic algorithm: PLOFF. This bonus encourages the actor to stay close, in terms of the defined pseudometric, to the support of the logged transitions. Finally, we evaluate the method on hand manipulation and locomotion tasks.
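
To make the lookup-based bonus concrete, below is a minimal sketch of how such a bonus could be computed from a learned pseudometric and a logged dataset. The function name `lookup_bonus`, the `scale` coefficient, and the stand-in Euclidean pseudometric are illustrative assumptions only; the paper learns the pseudometric from transitions and integrates the bonus into an actor-critic algorithm (PLOFF), which is not reproduced here.

```python
import numpy as np

def lookup_bonus(query_sa, dataset_sa, pseudometric, scale=1.0):
    """Lookup-based bonus: negative distance from a candidate state-action
    pair to its nearest neighbour in the logged dataset, measured with a
    (learned) pseudometric. Names and `scale` are illustrative assumptions."""
    # Distance from the query pair to every logged (s, a) pair.
    distances = np.array([pseudometric(query_sa, sa) for sa in dataset_sa])
    # Penalise candidates that lie far from the dataset's support.
    return -scale * distances.min()

# Usage with a stand-in pseudometric (plain Euclidean distance, purely for
# illustration -- the paper's pseudometric is learned from logged transitions).
dataset = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.5, -1.0])]
euclid = lambda x, y: float(np.linalg.norm(x - y))
print(lookup_bonus(np.array([0.1, 0.0, 1.0]), dataset, euclid))
```

In an actor-critic setting, a bonus of this form can be added to the actor's objective (or the critic's target) so that the policy is rewarded for proposing actions whose state-action pairs lie near the logged data.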
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2103.01948/code)