Meta-Imitation Learning by Watching Video Demonstrations

29 Sept 2021, 00:34 (modified: 08 May 2022, 16:39) · ICLR 2022 Poster
Keywords: Meta-imitation Learning, One-shot Learning, Learning by Watching, Generative Adversarial Networks
Abstract: Meta-imitation learning is a promising technique that allows a robot to learn a new task from observing one or a few human demonstrations. However, it usually requires a significant number of demonstrations from both humans and robots during the meta-training phase, which makes data collection laborious, especially when recording robot actions and specifying the correspondence between human and robot. In this work, we present an approach to meta-imitation learning by watching video demonstrations from humans. In contrast to prior work, our approach translates human videos into practical robot demonstrations and trains the meta-policy with an adaptive loss based on the quality of the translated data. Our approach relies only on human videos and does not require robot demonstrations, which facilitates data collection and is more in line with human imitation behavior. Experiments reveal that our method achieves performance comparable to the baseline on quickly learning a set of vision-based tasks by watching a single video demonstration.
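The "adaptive loss based on the quality of the translated data" can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a linear policy, a squared-error imitation loss, and a scalar quality weight per translated demonstration (all illustrative choices), and shows one quality-weighted inner adaptation step, so that low-quality translated demonstrations move the policy less.

```python
import numpy as np

def inner_adapt(theta, demo_states, demo_actions, quality, lr=0.1):
    """One quality-weighted behavior-cloning step on a translated demo.

    The squared-error imitation gradient is scaled by `quality` in [0, 1]
    (a hypothetical scalar stand-in for the paper's adaptive loss), so
    poorly translated demonstrations adapt the policy less aggressively.
    """
    preds = demo_states @ theta                              # linear policy: s -> a
    grad = demo_states.T @ (preds - demo_actions) / len(demo_states)
    return theta - lr * quality * grad

# Toy data: states, and actions from an unknown linear expert.
rng = np.random.default_rng(0)
states = rng.normal(size=(16, 4))
actions = states @ rng.normal(size=(4, 2))

theta0 = np.zeros((4, 2))
theta_hi = inner_adapt(theta0, states, actions, quality=1.0)  # trusted demo
theta_lo = inner_adapt(theta0, states, actions, quality=0.2)  # noisy translation
```

A high-quality demonstration (`quality=1.0`) produces a proportionally larger adaptation step than a low-quality one, which is the basic effect the adaptive loss is described as providing.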
One-sentence Summary: We present an approach to meta-imitation learning by watching video demonstrations from humans.
Supplementary Material: zip
14 Replies
