Demonstration-Guided Reinforcement Learning with Learned Skills

Mar 09, 2021 (edited Apr 18, 2021) · ICLR 2021 Workshop SSL-RL Blind Submission
  • Keywords: Reinforcement Learning, Imitation Learning, Transfer Learning
  • TL;DR: We propose an algorithm that extracts learned skills from large, task-agnostic datasets and uses them for efficient demonstration-guided reinforcement learning on long-horizon tasks.
  • Abstract: Demonstration-guided reinforcement learning (RL) is a promising approach for learning complex behaviors by leveraging both reward feedback and a set of target task demonstrations. Prior approaches for demonstration-guided RL treat every new task as an independent learning problem and attempt to follow the provided demonstrations step-by-step, akin to a human trying to imitate a completely unseen behavior by following the demonstrator's exact muscle movements. Naturally, such learning will be slow, but often new behaviors are not completely unseen: they share subtasks with behaviors we have previously learned. In this work, we aim to exploit this shared subtask structure to increase the efficiency of demonstration-guided RL. We first learn a set of reusable skills from large offline datasets of prior experience collected across many tasks. We then propose an algorithm for demonstration-guided RL that efficiently leverages the provided demonstrations by following the demonstrated skills instead of the primitive actions, resulting in substantial performance improvements over prior demonstration-guided RL approaches. We validate the effectiveness of our approach on long-horizon maze navigation and complex robot manipulation tasks.
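The abstract describes a two-phase recipe: first extract reusable skills from a large task-agnostic dataset, then guide RL on a new task by following the *skills* seen in demonstrations rather than their primitive actions. The snippet below is a minimal, hypothetical sketch of that structure only; the paper's actual method uses learned latent skill embeddings and skill priors, whereas here skills are approximated as clustered fixed-length action chunks, and all names (`extract_skills`, `encode_demo_as_skills`, `SKILL_LEN`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-phase idea from the abstract:
# (1) extract reusable "skills" (fixed-length action subsequences) from a
#     large task-agnostic dataset via toy k-means clustering;
# (2) re-encode a target-task demonstration as a sequence of skill indices,
#     which an RL agent could then be guided to follow.
import numpy as np

SKILL_LEN = 4  # assumed fixed skill horizon (illustrative choice)

def extract_skills(trajectories, n_skills=8, seed=0):
    """Cluster fixed-length action chunks into a discrete skill library."""
    chunks = np.array([t[i:i + SKILL_LEN]
                       for t in trajectories
                       for i in range(0, len(t) - SKILL_LEN + 1, SKILL_LEN)])
    rng = np.random.default_rng(seed)
    # Toy k-means: each skill is a centroid over action chunks.
    centroids = chunks[rng.choice(len(chunks), n_skills, replace=False)]
    for _ in range(10):
        dists = ((chunks[:, None] - centroids[None]) ** 2).sum(axis=(2, 3))
        labels = np.argmin(dists, axis=1)
        for k in range(n_skills):
            if (labels == k).any():
                centroids[k] = chunks[labels == k].mean(axis=0)
    return centroids

def encode_demo_as_skills(demo, skills):
    """Map a demonstration to its nearest-skill index sequence."""
    ids = []
    for i in range(0, len(demo) - SKILL_LEN + 1, SKILL_LEN):
        chunk = demo[i:i + SKILL_LEN]
        ids.append(int(np.argmin(((skills - chunk) ** 2).sum(axis=(1, 2)))))
    return ids
```

In this sketch, the skill-index sequence returned by `encode_demo_as_skills` plays the role of the "demonstrated skills" the abstract refers to: a downstream RL agent would be rewarded (or regularized) for selecting matching skills, rather than for reproducing the demonstrator's primitive actions step by step.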