"Good Robot! Now Watch This!": Repurposing Reinforcement Learning for Task-to-Task TransferDownload PDF

Published: 13 Sept 2021, Last Modified: 05 May 2023, CoRL 2021 Poster
Keywords: Deep Learning in Grasping and Manipulation, Computer Vision for Robotic Applications, Imitation Learning, Reinforcement Learning, Learning from Demonstration
Abstract: Modern Reinforcement Learning (RL) algorithms are not sample-efficient enough to train on multi-step tasks in complex domains, impeding their wider deployment in the real world. We address this problem by leveraging the insight that RL models trained to complete one set of tasks can be repurposed to complete related tasks when given just a handful of demonstrations. Based on this insight, we propose See-SPOT-Run (SSR), a new computational approach to robot learning that enables a robot to complete a variety of real robot tasks in novel problem domains without task-specific training. SSR uses pretrained RL models to create vectors that represent model, task, and action relevance in demonstration and test scenes. SSR then compares these vectors via our Cycle Consistency Distance (CCD) metric to determine the next action to take. SSR completes 58% more task steps and 20% more trials than a baseline few-shot learning method that requires task-specific training. SSR also achieves a four-order-of-magnitude improvement in compute efficiency and a 20% to three-order-of-magnitude improvement in sample efficiency compared to the baseline and to training RL models from scratch. To our knowledge, we are the first to address multi-step tasks from demonstration on a real robot without task-specific training, where both the visual input and action space output are high dimensional. Code will be made available.
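The abstract describes comparing demonstration and test-scene relevance vectors with a Cycle Consistency Distance to choose the next action. The paper's exact CCD definition is not given here, so the sketch below is only an illustrative assumption: it treats CCD as a nearest-neighbour round-trip distance in embedding space, and all function and variable names (`nearest`, `cycle_consistency_distance`, `select_action`) are hypothetical rather than from the SSR codebase.

```python
import numpy as np

def nearest(vectors, query):
    """Return the row of `vectors` closest to `query` (Euclidean distance)."""
    dists = np.linalg.norm(vectors - query, axis=1)
    return vectors[np.argmin(dists)]

def cycle_consistency_distance(test_vec, demo_vecs, test_vecs):
    """Hypothetical CCD: map a test vector to its nearest demonstration vector,
    map that back to the test set, and measure how far the round trip drifts
    from the starting point. Smaller drift = more cycle-consistent."""
    demo_match = nearest(demo_vecs, test_vec)
    round_trip = nearest(test_vecs, demo_match)
    return np.linalg.norm(test_vec - round_trip)

def select_action(candidate_action_vecs, demo_vecs):
    """Pick the candidate action whose relevance vector is most
    cycle-consistent with the demonstration vectors."""
    candidate_action_vecs = np.asarray(candidate_action_vecs)
    demo_vecs = np.asarray(demo_vecs)
    scores = [
        cycle_consistency_distance(v, demo_vecs, candidate_action_vecs)
        for v in candidate_action_vecs
    ]
    return int(np.argmin(scores))
```

Under this reading, action selection is training-free at test time: the pretrained RL models only supply the embeddings, and the lowest-CCD candidate is executed as the next step.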
Supplementary Material: zip
Poster: png