Exploring with Sticky Mittens: Reinforcement Learning with Expert Interventions via Option Templates

Published: 10 Sept 2022, Last Modified: 05 May 2023. CoRL 2022 Poster.
Keywords: Sample-Efficient Reinforcement Learning, Expert Intervention, Options, Planning with Primitives
TL;DR: Adding expert intervention during the training phase yields an orders-of-magnitude reduction in sample complexity, which is typically very high for realistic RL tasks.
Abstract: Long-horizon robot learning tasks with sparse rewards pose a significant challenge for current reinforcement learning algorithms. A key feature enabling humans to learn challenging control tasks is that they often receive expert intervention, which lets them understand the high-level structure of the task before mastering low-level control actions. We propose a framework for leveraging expert intervention to solve long-horizon reinforcement learning tasks. We consider \emph{option templates}, which are specifications encoding a potential option that can be trained using reinforcement learning. We formulate expert intervention as allowing the agent to execute option templates before learning an implementation. This enables the agent to use an option before committing costly resources to learning it. We evaluate our approach on three challenging reinforcement learning problems, showing that it outperforms state-of-the-art approaches by two orders of magnitude. Videos of trained agents and our code can be found at: https://sites.google.com/view/stickymittens
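To make the abstract's central idea concrete, here is a minimal toy sketch (not the paper's actual API; all names, the 1-D chain environment, and the subgoal values are hypothetical) of an option template: a specification of when an option applies and what subgoal it achieves, which the expert can execute directly as an intervention until a learned low-level implementation is substituted in.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class OptionTemplate:
    """Specification of a potential option: where it may start, the subgoal
    it achieves, and an expert intervention that achieves that subgoal
    before any low-level implementation has been learned."""
    name: str
    initiation: Callable[[int], bool]       # states where the option may start
    termination: Callable[[int], bool]      # subgoal: states where it ends
    expert_execute: Callable[[int], int]    # expert intervention: state -> state
    learned_policy: Optional[Callable[[int], int]] = None  # filled in later by RL

    def execute(self, state: int) -> int:
        # Use the learned implementation once available; otherwise let the
        # expert achieve the subgoal directly ("sticky mittens").
        if self.learned_policy is not None:
            while not self.termination(state):
                state = self.learned_policy(state)
            return state
        return self.expert_execute(state)

# Toy 1-D chain task: start at state 0; sparse reward only at state 10.
GOAL = 10
templates = [
    OptionTemplate(
        name=f"reach_{g}",
        initiation=lambda s, g=g: s < g,
        termination=lambda s, g=g: s >= g,
        expert_execute=lambda s, g=g: g,  # expert simply achieves the subgoal
    )
    for g in (5, GOAL)
]

def run_high_level(state: int) -> int:
    # The high-level agent chains applicable option templates toward the goal,
    # discovering the task's structure before any option is implemented.
    for opt in templates:
        if opt.initiation(state):
            state = opt.execute(state)
    return state

final = run_high_level(0)
print(final, "reward:", 1.0 if final == GOAL else 0.0)

# Later, a learned controller replaces the expert for the first template;
# the high-level plan is unchanged.
templates[0].learned_policy = lambda s: s + 1
print(run_high_level(0))
```

In this sketch, replacing `expert_execute` with `learned_policy` one template at a time mirrors the idea of using an option before committing resources to learning it.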
Student First Author: no
Supplementary Material: zip
Website: https://sites.google.com/view/stickymittens
Code: https://github.com/sticky-mittens