One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning

12 Feb 2018 (modified: 12 Feb 2018) · ICLR 2018 Workshop Submission
Abstract: Humans and animals are capable of learning a new behavior by observing others perform the skill just once. We consider the problem of allowing a robot to do the same: learning from raw video pixels of a human, even when there is substantial domain shift in perspective, environment, and embodiment between the robot and the observed human. Prior approaches to this problem have hand-specified how human and robot actions correspond and have often relied on explicit human pose detection systems. In this work, we present an approach for one-shot learning from a video of a human by using human and robot demonstration data from a variety of previous tasks to build up prior knowledge through meta-learning. Then, by combining this prior knowledge with only a single video demonstration from a human, the robot can perform the task that the human demonstrated. We present experiments on a PR2 robot arm, demonstrating that after meta-learning, the robot can learn to place, push, and pick-and-place new objects using just one video of a human performing the manipulation.
TL;DR: A robot can learn a new behavior from raw video pixels of a human, even when there is substantial domain shift between the robot and the observed human.
Keywords: robot learning, imitation learning, few-shot learning, meta-learning, robot vision
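The abstract describes building prior knowledge through meta-learning and then adapting from a single human video; as the title suggests, a natural instantiation is a MAML-style inner/outer loop in which the inner adaptation step uses a learned loss (the human video carries no robot action labels) and the outer objective behavior-clones the paired robot demonstration. The sketch below illustrates only that structure; the two-layer policy, the feature dimensions, and the simple weighted learned loss are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of a domain-adaptive meta-learning step: adapt the policy
# from human-video features via a learned loss, then meta-train so the
# adapted policy imitates the paired robot demonstration. All shapes and
# modules here are assumptions for illustration.
import torch
import torch.nn.functional as F

OBS_DIM, ACT_DIM, HID = 32, 7, 64   # assumed feature/action dimensions
ALPHA = 0.01                        # inner-loop step size

def init_policy():
    # Parameters kept as an explicit list so gradients can flow through
    # the inner update (second-order MAML).
    return [torch.randn(HID, OBS_DIM, requires_grad=True) * 0.1,
            torch.zeros(HID, requires_grad=True),
            torch.randn(ACT_DIM, HID, requires_grad=True) * 0.1,
            torch.zeros(ACT_DIM, requires_grad=True)]

def policy(obs, params):
    w1, b1, w2, b2 = params
    return F.linear(torch.tanh(F.linear(obs, w1, b1)), w2, b2)

def learned_loss(human_feats, params, loss_weights):
    # Stand-in for a learned adaptation objective: a weighted penalty on
    # policy outputs along the human video's features. No action labels used.
    return (loss_weights * policy(human_feats, params) ** 2).mean()

def adapted_params(params, human_feats, loss_weights):
    # Inner loop: one gradient step on the learned loss.
    inner = learned_loss(human_feats, params, loss_weights)
    grads = torch.autograd.grad(inner, params, create_graph=True)
    return [p - ALPHA * g for p, g in zip(params, grads)]

# One meta-training step on a (human video, robot demo) task pair.
params = init_policy()
loss_weights = torch.randn(ACT_DIM, requires_grad=True)
human_feats = torch.randn(10, OBS_DIM)                  # from human video
robot_obs, robot_acts = torch.randn(10, OBS_DIM), torch.randn(10, ACT_DIM)

theta_prime = adapted_params(params, human_feats, loss_weights)
meta_loss = F.mse_loss(policy(robot_obs, theta_prime), robot_acts)
meta_loss.backward()   # gradients reach the policy AND the learned loss
```

At test time only the inner adaptation runs: the meta-trained learned loss lets the robot update its policy from one new human video without any robot action labels for the new task.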