Provable Representation Learning for Imitation with Contrastive Fourier Features

May 21, 2021 (edited Jan 21, 2022) · NeurIPS 2021 Poster
  • Keywords: Representation Learning, Imitation Learning, Contrastive Learning, Reinforcement Learning
  • TL;DR: We develop a provably beneficial representation learning objective for imitation learning. By approximating the MDP transitions via contrastive learning, our objective achieves substantial improvements on tabular environments and Atari games.
  • Abstract: In imitation learning, it is common to learn a behavior policy to match an unknown target policy via max-likelihood training on a collected set of target demonstrations. In this work, we consider using offline experience datasets -- potentially far from the target distribution -- to learn low-dimensional state representations that provably improve the sample efficiency of downstream imitation learning. A central challenge in this setting is that the unknown target policy itself may not exhibit low-dimensional behavior, and so there is a potential for the representation learning objective to alias states in which the target policy acts differently. Circumventing this challenge, we derive a representation learning objective that provides an upper bound on the performance difference between the target policy and a low-dimensional policy trained with max-likelihood, and this bound is tight regardless of whether the target policy itself exhibits low-dimensional structure. Moving to the practicality of our method, we show that our objective can be implemented as contrastive learning, in which the transition dynamics are approximated by either an implicit energy-based model or, in some special cases, an implicit linear model with representations given by random Fourier features. Experiments on both tabular environments and high-dimensional Atari games provide quantitative evidence for the practical benefits of our proposed objective.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/google-research/google-research/tree/master/rl_repr
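The abstract's "implicit linear model with representations given by random Fourier features" builds on the standard random Fourier feature construction of Rahimi and Recht, in which an inner product of randomized cosine features approximates an RBF kernel. The sketch below is a hypothetical illustration of that construction only (the function names `make_rff` and `phi` are our own, not from the linked repository):

```python
import numpy as np

def make_rff(dim, n_features=4096, sigma=1.0, seed=0):
    """Return a random Fourier feature map phi such that
    phi(x) @ phi(y) ~= exp(-||x - y||^2 / (2 * sigma^2))."""
    rng = np.random.default_rng(seed)
    # Frequencies sampled from the Gaussian spectral density of the RBF kernel.
    W = rng.normal(scale=1.0 / sigma, size=(dim, n_features))
    # Random phase offsets, uniform on [0, 2*pi).
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)

    def phi(x):
        return np.sqrt(2.0 / n_features) * np.cos(x @ W + b)

    return phi

phi = make_rff(dim=4)
x = np.array([0.3, -0.1, 0.5, 0.0])
y = np.array([0.2, 0.1, 0.4, -0.2])
approx = phi(x) @ phi(y)                       # kernel estimate from features
exact = np.exp(-np.sum((x - y) ** 2) / 2.0)    # true RBF kernel, sigma = 1
```

The approximation error shrinks as O(1/sqrt(n_features)), which is what lets a linear model over these features stand in for a kernel-based transition model.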