Provable Benefits of Representational Transfer in Reinforcement Learning

05 Oct 2022 (modified: 17 Nov 2024) · Offline RL Workshop NeurIPS 2022 · Readers: Everyone
TL;DR: We design the first provably efficient algorithm for offline to online representation transfer in reinforcement learning
Abstract: We study the problem of representational transfer in RL, where an agent first pretrains offline on a number of source tasks to discover a shared representation, which is subsequently used to learn a good policy online in a target task. We propose a new notion of task relatedness between source and target tasks and develop a novel approach for representational transfer under this assumption. Concretely, we show that given generative access to a set of source tasks, we can discover a representation with which subsequent linear RL techniques quickly converge to a near-optimal policy, using only online access to the target task. The sample complexity is close to that of knowing the ground-truth features in the target task, and comparable to prior representation learning results in the source tasks. We complement our positive results with lower bounds in the absence of generative access, and validate our findings with an empirical evaluation on rich observation MDPs that require deep exploration.
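To make the two-phase pipeline in the abstract concrete, here is a minimal, hypothetical sketch: offline, a shared representation is extracted from source tasks sampled with generative access; online, a linear RL method is run on the target task with that representation frozen. This is not the paper's algorithm. The representation-learning step is replaced by a simple stand-in (top-d left singular vectors of stacked source transition kernels), the online phase is a simplified LSVI-UCB-style loop, and all dimensions, constants (`d`, `H`, `beta`), and helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d, H = 20, 4, 5, 10                      # states, actions, feature dim, horizon

# Ground-truth shared features (unknown to the learner). Rows are convex weights,
# so phi_star @ mu is a valid transition kernel whenever each row of mu is a
# distribution over next states.
phi_star = rng.random((S * A, d))
phi_star /= phi_star.sum(axis=1, keepdims=True)

def random_low_rank_mdp():
    """Sample a task whose transitions and rewards factor through phi_star."""
    mu = rng.random((d, S))
    mu /= mu.sum(axis=1, keepdims=True)
    P = phi_star @ mu                           # (S*A, S) transition kernel, rank <= d
    r = phi_star @ rng.random(d)                # rewards linear in the shared features
    return P, r

# ---- Phase 1: offline pretraining on source tasks -------------------------
# Stand-in for the representation-learning step: stack the source transition
# kernels (available via generative access) and keep the top-d left singular
# vectors as the learned feature map.
source_kernels = [random_low_rank_mdp()[0] for _ in range(5)]
U, _, _ = np.linalg.svd(np.hstack(source_kernels), full_matrices=False)
phi_hat = U[:, :d]

# ---- Phase 2: online linear RL on the target with the frozen features -----
def lsvi_ucb(phi, P, r, episodes=150, lam=1.0, beta=0.3):
    """Simplified LSVI-UCB: ridge-regression Bellman backups plus an optimism bonus."""
    K = phi.shape[1]
    hist = []                                   # pooled (sa_index, reward, next_state)
    returns = []
    for _ in range(episodes):
        Lam = lam * np.eye(K)
        for sa, _, _ in hist:
            Lam += np.outer(phi[sa], phi[sa])
        Lam_inv = np.linalg.inv(Lam)
        Q = np.zeros((H + 1, S, A))
        for h in reversed(range(H)):
            V_next = Q[h + 1].max(axis=1)
            w = np.zeros(K)
            if hist:
                b = sum(phi[sa] * (rew + V_next[s2]) for sa, rew, s2 in hist)
                w = Lam_inv @ b
            for s in range(S):
                for a in range(A):
                    f = phi[s * A + a]
                    bonus = beta * np.sqrt(f @ Lam_inv @ f)
                    Q[h, s, a] = min(f @ w + bonus, H)
        s, ep_ret = 0, 0.0                      # roll out the greedy policy once
        for h in range(H):
            a = int(Q[h, s].argmax())
            sa = s * A + a
            s2 = rng.choice(S, p=P[sa])
            hist.append((sa, r[sa], s2))
            ep_ret += r[sa]
            s = s2
        returns.append(ep_ret)
    return returns

P_t, r_t = random_low_rank_mdp()                # target task shares phi_star
returns = lsvi_ucb(phi_hat, P_t, r_t)
print("mean return over the last 20 episodes:", round(float(np.mean(returns[-20:])), 3))
```

Because every task's transitions factor through the same `phi_star`, the stacked kernel matrix has rank at most `d`, so the SVD recovers a basis for the shared feature span; the online phase then only needs that span, mirroring the offline-to-online structure described above (under these toy assumptions).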
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/provable-benefits-of-representational/code)