Extraneousness-Aware Imitation Learning

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: visual imitation learning, imitation learning from noisy video
Abstract: Visual imitation learning is an effective approach for intelligent agents to obtain control policies from visual demonstration sequences. However, standard visual imitation learning assumes that the expert demonstration contains only task-relevant frames. While previous works have proposed learning from \textit{noisy} demonstrations, the problem remains challenging when the demonstration contains locally consistent yet task-irrelevant subsequences. We term this setting ``imitation-learning-with-extraneousness'' and introduce Extraneousness-Aware Imitation Learning (EIL), a self-supervised approach that learns visuomotor policies from third-person demonstrations in which extraneous subsequences exist. EIL learns action-conditioned self-supervised frame embeddings and aligns task-relevant frames across videos while excluding the extraneous parts. Our method thus allows agents to learn from extraneousness-rich demonstrations by intelligently ignoring irrelevant components. Experimental results show that EIL significantly outperforms strong baselines and approaches the performance of training on perfect demonstrations across various simulated continuous control tasks and a ``learning-from-slides'' task. The project page can be found here: https://sites.google.com/view/iclr2022eil/home.
One-sentence Summary: We propose Extraneousness-Aware Imitation Learning (EIL), together with a new imitation learning setting that learns from noisy yet temporally consistent demonstrations, and show that EIL outperforms baselines on various continuous and discrete control tasks.
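
To make the alignment-and-filtering idea described in the abstract concrete, here is a minimal, hypothetical sketch, not the authors' released method: it assumes frame embeddings have already been learned, matches frames across two videos with a soft nearest neighbour over cosine similarity, and discards the least-supported demonstration frames as extraneous. All names (align_and_filter, keep_ratio) are illustrative assumptions.

```python
# Hypothetical sketch of aligning task-relevant frames across videos while
# excluding extraneous ones. NOT the paper's implementation; names and the
# keep_ratio heuristic are assumptions for illustration.
import torch
import torch.nn.functional as F

def align_and_filter(emb_a, emb_b, keep_ratio=0.8):
    """Match frames of demonstration B against reference video A in
    embedding space and keep only the best-supported frames of B.

    emb_a: (Ta, D) frame embeddings of a reference video
    emb_b: (Tb, D) frame embeddings of a demonstration that may contain
           extraneous subsequences
    Returns indices of emb_b frames treated as task-relevant.
    """
    # Cosine similarity between every pair of frames across the two videos.
    sim = F.normalize(emb_a, dim=1) @ F.normalize(emb_b, dim=1).T  # (Ta, Tb)

    # Soft nearest neighbour: each row is a distribution over B's frames.
    # Extraneous frames of B should attract little total probability mass.
    match_prob = sim.softmax(dim=1)
    support = match_prob.sum(dim=0)  # (Tb,) aggregate support per B-frame

    # Keep the most-supported fraction of frames; the remainder is treated
    # as extraneous and excluded from imitation.
    k = max(1, int(keep_ratio * emb_b.shape[0]))
    return support.topk(k).indices.sort().values

# Toy usage: random embeddings stand in for learned, action-conditioned ones.
emb_a, emb_b = torch.randn(50, 32), torch.randn(80, 32)
print(align_and_filter(emb_a, emb_b)[:10])
```

The design choice sketched here is that extraneous frames, having no counterpart in other videos, receive little alignment mass and can be dropped by a simple top-k rule; the actual selection mechanism in EIL may differ.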