Imitation Learning from Visual Data with Multiple Intentions

15 Feb 2018 (modified: 17 Feb 2018) ICLR 2018 Conference Blind Submission
Abstract: Recent advances in learning from demonstrations (LfD) with deep neural networks have enabled learning complex robot skills that involve high-dimensional perception such as raw image inputs. LfD algorithms generally assume learning from single-task demonstrations. In practice, however, it is more efficient for a teacher to demonstrate a multitude of tasks without careful task setup, labeling, and engineering. Unfortunately, in such cases traditional imitation learning techniques fail to represent the multi-modal nature of the data, and often result in sub-optimal behavior. In this paper we present an LfD approach for learning multiple modes of behavior from visual data. Our approach is based on a stochastic deep neural network (SNN), which represents the underlying intention in the demonstration as a stochastic activation in the network. We present an efficient algorithm for training SNNs, and for learning with vision inputs we also propose an architecture that associates the intention with a stochastic attention module. We demonstrate our method on real-robot visual object reaching tasks, and show that it can reliably learn the multiple behavior modes in the demonstration data. Video results are available at https://vimeo.com/240212286/fd401241b9.
TL;DR: Multi-modal imitation learning from unstructured demonstrations using a stochastic neural network to model intention.
Keywords: multi-modal imitation learning, deep learning, generative models, stochastic neural networks
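To make the core idea concrete, below is a minimal sketch of behavior cloning with a discrete latent intention: the policy is conditioned on a sampled intention code, and each demonstration is fit by whichever intention explains it best, so different intentions can specialize to different behavior modes. This is an illustrative assumption-based example, not the paper's exact SNN training algorithm or its stochastic attention module; the network sizes, number of intentions, and min-over-intentions loss are placeholders chosen for brevity.

```python
import torch
import torch.nn as nn

class IntentionPolicy(nn.Module):
    """Policy conditioned on an observation and a discrete latent intention code."""
    def __init__(self, obs_dim, action_dim, num_intentions=4, hidden=128):
        super().__init__()
        self.num_intentions = num_intentions
        self.net = nn.Sequential(
            nn.Linear(obs_dim + num_intentions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, obs, intention_idx):
        # One-hot encode the latent intention and concatenate it with the observation.
        code = nn.functional.one_hot(intention_idx, self.num_intentions).float()
        return self.net(torch.cat([obs, code], dim=-1))


def multimodal_bc_loss(policy, obs, actions):
    """For each demonstration pair, keep only the best-matching intention's
    prediction error, letting intentions specialize to different modes."""
    per_intention = []
    for k in range(policy.num_intentions):
        idx = torch.full((obs.shape[0],), k, dtype=torch.long)
        pred = policy(obs, idx)
        per_intention.append(((pred - actions) ** 2).mean(dim=-1))  # per-sample MSE
    per_intention = torch.stack(per_intention, dim=0)               # [K, batch]
    return per_intention.min(dim=0).values.mean()                   # min over intentions


if __name__ == "__main__":
    obs_dim, action_dim = 16, 4                 # placeholder dimensions
    policy = IntentionPolicy(obs_dim, action_dim)
    optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
    obs = torch.randn(32, obs_dim)              # placeholder demonstration observations
    actions = torch.randn(32, action_dim)       # placeholder demonstration actions
    loss = multimodal_bc_loss(policy, obs, actions)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

At test time one would fix a single intention index and roll the policy out, which selects one of the learned behavior modes; in the paper's setting the intention is instead a stochastic activation inside the network, sampled once per rollout.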