Imitation Learning from Pixel Observations for Continuous Control

12 Oct 2021 (modified: 05 May 2023) · Deep RL Workshop, NeurIPS 2021
Keywords: imitation learning, optimal transport, GAIL, adversarial learning
TL;DR: We propose strong recipes, based on adversarial learning and optimal transport, for imitation learning from visual observations alone.
Abstract: We study imitation learning from visual observations alone for controlling dynamical systems with continuous states and actions. This setting is attractive because of the large amount of video data from which agents could learn. However, it is challenging because $i)$ the actions are not observed and $ii)$ the visual observation space is high-dimensional. In this setting, we explore recipes for imitation learning based on adversarial learning and optimal transport. These recipes allow us to scale both approaches to expert-level performance on visual continuous control tasks from the DeepMind Control Suite. We investigate the tradeoffs of these approaches and present a comprehensive evaluation of the key design choices. To encourage reproducible research in this area, we provide an easy-to-use implementation for benchmarking visual imitation learning, including our methods and expert demonstrations.
Supplementary Material: zip
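
As a rough illustration of the optimal-transport recipe mentioned in the abstract, the sketch below derives per-step pseudo-rewards by matching an agent trajectory to an expert trajectory with entropy-regularized optimal transport (Sinkhorn). The function names, the cosine cost, and the hyperparameters are assumptions chosen for illustration, not the paper's exact design choices; the encoded observation features are assumed to come from some visual encoder.

```python
import numpy as np

def sinkhorn_coupling(cost, eps=0.05, n_iters=100):
    """Entropy-regularized OT plan between two uniform marginals (Sinkhorn iterations)."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / eps)              # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan with (approx.) marginals a, b

def ot_imitation_rewards(agent_feats, expert_feats):
    """Per-step pseudo-rewards from the OT plan between agent and expert trajectories.

    agent_feats, expert_feats: arrays of shape (T, d) holding encoded observations
    (e.g. the output of a CNN trunk). Hypothetical helper; a sketch only.
    """
    # Cosine cost between every agent step and every expert step.
    a = agent_feats / np.linalg.norm(agent_feats, axis=1, keepdims=True)
    e = expert_feats / np.linalg.norm(expert_feats, axis=1, keepdims=True)
    cost = 1.0 - a @ e.T
    plan = sinkhorn_coupling(cost)
    # Reward each agent step by (the negative of) its transport cost onto the expert,
    # rescaled by the trajectory length so rewards are on the scale of the cost.
    return -(plan * cost).sum(axis=1) * cost.shape[0]
```

These pseudo-rewards could then be fed to any standard RL algorithm in place of the environment reward; the adversarial (GAIL-style) recipe would instead train a discriminator on encoded observations and use its output as the reward signal.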