Attentive One-Shot Meta-Imitation Learning From Visual Demonstration

Published: 01 Jan 2022, Last Modified: 05 Nov 2023, ICRA 2022
Abstract: The ability to apply a previously learned skill (e.g., pushing) to a new task (a new context or object) is an important requirement for new-age robots. This paper addresses the problem by proposing a deep meta-imitation learning framework, comprising an attentive embedding network and a control network, that learns a new task in an end-to-end manner from only one or a few visual demonstrations. The feature embeddings learnt by incorporating spatial attention are shown to provide higher embedding and control accuracy than state-of-the-art methods such as TecNet [7] and MIL [4]. The interaction between the embedding and control networks is improved through multiplicative skip-connections, which are shown to mitigate overfitting of the trained model. The superiority of the proposed model is established through rigorous experiments on a publicly available dataset and on a new dataset created using PyBullet [36]. Several ablation studies justify the design choices.
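The abstract's two key mechanisms can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's actual architecture: the feature shapes, the channel-summed attention scores, and the elementwise gating are all hypothetical simplifications of "spatial attention" and "multiplicative skip-connections" as generically understood.

```python
import numpy as np

def spatial_attention(features):
    """Toy soft spatial attention over a conv feature map.

    features: (C, H, W) array. Spatial scores are the channel-summed
    activations; a softmax over locations yields attention weights, and
    each channel is averaged under those weights to give a C-dim embedding.
    """
    C, H, W = features.shape
    scores = features.sum(axis=0).reshape(-1)   # (H*W,) per-location scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over spatial locations
    flat = features.reshape(C, -1)              # (C, H*W)
    return flat @ weights                       # (C,) attended embedding

def multiplicative_skip(control_hidden, embedding):
    """Modulate control-network activations by the task embedding via an
    elementwise (Hadamard) product -- one plausible reading of a
    'multiplicative skip-connection'."""
    return control_hidden * embedding

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 5, 5))       # stand-in for conv features
emb = spatial_attention(feat)               # task/context embedding, shape (8,)
hidden = rng.standard_normal(8)             # stand-in control-net activations
out = multiplicative_skip(hidden, emb)      # gated control features, shape (8,)
```

The multiplicative coupling lets the demonstration embedding gate, rather than merely shift, the control features, which is one intuition for why it could curb overfitting relative to additive conditioning.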