Optimal Transport for Offline Imitation Learning

05 Oct 2022 (modified: 17 Nov 2024) · Offline RL Workshop, NeurIPS 2022
Keywords: offline reinforcement learning, imitation learning, optimal transport
TL;DR: We present a reward labeling algorithm for offline reinforcement/imitation learning
Abstract: With the advent of large datasets, offline reinforcement learning is a promising framework for learning good decision-making policies without the need to interact with the real environment. However, offline RL requires the dataset to be reward-annotated, which presents practical challenges when reward engineering is difficult or when obtaining reward annotations is labor-intensive. In this paper, we introduce Optimal Transport Reward labeling (OTR), an algorithm that assigns rewards to offline trajectories using only a few high-quality demonstrations. OTR's key idea is to use optimal transport to compute an optimal alignment between an unlabeled trajectory in the dataset and an expert demonstration. The resulting similarity measure can be interpreted as a reward, which an offline RL algorithm can then use to learn the policy. OTR is easy to implement and computationally efficient. On D4RL benchmarks, we show that OTR with a single demonstration can consistently match the performance of offline RL with ground-truth rewards.
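The abstract describes labeling each step of an unlabeled trajectory with a reward derived from its optimal-transport alignment to an expert demonstration. Below is a minimal, self-contained sketch of that idea in NumPy: it solves entropic-regularized OT with Sinkhorn iterations over a pairwise distance cost and assigns each step the negative cost attributed to it by the coupling. The Euclidean cost, uniform marginals, regularization value, and function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.05, n_iters=200):
    """Entropic-regularized OT via Sinkhorn iterations.

    cost: (T, T_e) pairwise cost matrix; a, b: marginal weights.
    Returns the transport plan (coupling) of shape (T, T_e).
    """
    K = np.exp(-cost / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u + 1e-30)           # column scaling update
        u = a / (K @ v + 1e-30)             # row scaling update
    return u[:, None] * K * v[None, :]      # coupling = diag(u) K diag(v)

def ot_rewards(traj_obs, expert_obs, reg=0.05):
    """Label each step of an unlabeled trajectory with a pseudo-reward.

    Step t's reward is the negative transport cost the optimal coupling
    assigns to aligning observation t with the expert trajectory, so steps
    that align cheaply with expert states receive higher reward.
    (Cost choice and scaling here are assumptions for illustration.)
    """
    T, T_e = len(traj_obs), len(expert_obs)
    # Pairwise Euclidean distances, normalized for numerical stability.
    cost = np.linalg.norm(traj_obs[:, None, :] - expert_obs[None, :, :], axis=-1)
    cost = cost / (cost.max() + 1e-30)
    a = np.full(T, 1.0 / T)                 # uniform mass over unlabeled steps
    b = np.full(T_e, 1.0 / T_e)             # uniform mass over expert steps
    plan = sinkhorn(cost, a, b, reg=reg)
    return -(plan * cost).sum(axis=1)       # per-step reward = -aligned cost

# Usage: label a random trajectory against a single (synthetic) demonstration.
rng = np.random.default_rng(0)
expert = rng.normal(size=(100, 17))         # 100 expert observations, 17-dim
unlabeled = rng.normal(size=(80, 17))       # 80-step unlabeled trajectory
rewards = ot_rewards(unlabeled, expert)
print(rewards.shape)                        # (80,)
```

Once every trajectory in the dataset is labeled this way, the rewards can be handed to any off-the-shelf offline RL algorithm in place of ground-truth annotations.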
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/optimal-transport-for-offline-imitation/code)