Keywords: reinforcement learning, offline reinforcement learning, unsupervised reinforcement learning, zero-shot reinforcement learning
TL;DR: We propose methods for improving the performance of zero-shot RL methods when trained on low-quality offline datasets.
Abstract: Zero-shot reinforcement learning (RL) promises to provide agents that can perform _any_ task in an environment after an offline, reward-free pre-training phase. Methods leveraging successor measures and successor features have shown strong performance in this setting, but require access to large heterogeneous datasets for pre-training, which cannot be expected for most real problems. Here, we explore how the performance of zero-shot RL methods degrades when trained on small homogeneous datasets, and propose fixes inspired by _conservatism_, a well-established feature of performant single-task offline RL algorithms. We evaluate our proposals across various datasets, domains, and tasks, and show that conservative zero-shot RL algorithms outperform their non-conservative counterparts on low-quality datasets, and perform no worse on high-quality datasets. Somewhat surprisingly, our proposals also outperform baselines that get to see the task during training. Our code is available via the project page https://enjeeneer.io/projects/zero-shot-rl/.
Primary Area: Reinforcement learning
Submission Number: 6496