Deep Imitative Models for Flexible Inference, Planning, and Control

Published: 24 Nov 2018, Last Modified: 05 May 2023. NIPS 2018 Workshop MLITS Submission.
Abstract: Imitation learning provides an appealing framework for autonomous control: in many tasks, demonstrations of preferred behavior can be readily obtained from human experts, removing the need for costly and potentially dangerous online data collection in the real world. A disadvantage of imitation learning is its limited flexibility to reach new goals safely at test time. In contrast, classical model-based reinforcement learning (MBRL) offers considerably more flexibility: a model learned from data can be reused at test time to achieve a wide variety of goals, yet its dynamics model captures only what is possible, not what is preferred, resulting in potentially dangerous behavior outside the distribution of expert behavior. In this paper, we aim to combine these benefits to learn Imitative Models: probabilistic predictive models able to plan expert-like trajectories to achieve arbitrary goals. We find this method substantially outperforms both direct imitation and classical MBRL in a simulated driving task, and can be learned efficiently from a fixed set of expert demonstrations. We also show our model can flexibly incorporate user-supplied costs at test time, can plan to sequences of goals, and can even perform well with imprecise goals, including goals on the wrong side of the road.
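The combination the abstract describes can be made concrete with a minimal sketch: planning reduces to searching for a trajectory s that scores well under both a learned expert-trajectory density q(s | context) and a test-time goal likelihood p(goal | s). The sketch below assumes both are differentiable callables (their names, signatures, and the gradient-ascent planner are illustrative assumptions, not the authors' released code):

```python
import torch

def imitative_plan(log_q, log_p_goal, context, horizon=20, steps=100, lr=0.1):
    """Plan by maximizing log q(s | context) + log p(goal | s) over trajectories s.

    log_q and log_p_goal are assumed callables returning scalar log-densities;
    this interface is a hypothetical stand-in for the paper's learned model.
    """
    # Candidate plan: `horizon` 2-D waypoints, optimized directly.
    s = torch.zeros(horizon, 2, requires_grad=True)
    opt = torch.optim.Adam([s], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Negate because optimizers minimize; we want to maximize the sum.
        loss = -(log_q(s, context) + log_p_goal(s))
        loss.backward()
        opt.step()
    return s.detach()
```

In this decomposition, the imitative density keeps the plan inside the distribution of expert behavior while the goal term steers it toward the target; the same factorization is what lets user-supplied costs or imprecise goals be swapped in at test time without retraining.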
TL;DR: Hybrid Vision-Driven Imitation Learning and Model-Based Reinforcement Learning for Inference, Planning, and Control
Keywords: imitation learning, forecasting, computer vision