Fail-Safe Adversarial Generative Imitation Learning

Published: 07 Nov 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: For flexible yet safe imitation learning (IL), we propose theory and a modular method with a safety layer that enables a closed-form probability density/gradient of the safe generative continuous policy, end-to-end generative adversarial training, and worst-case safety guarantees. The safety layer maps all actions into a set of safe actions and uses the change-of-variables formula plus additivity of measures to obtain the density. The set of safe actions is inferred by first checking the safety of a finite sample of actions via adversarial reachability analysis of fallback maneuvers, and then extending this guarantee to these actions' neighborhoods using, e.g., Lipschitz continuity. We provide a theoretical analysis showing the robustness advantage of using the safety layer already during training (imitation error linear in the horizon) compared to using it only at test time (up to quadratic error). In an experiment on real-world driver-interaction data, we empirically demonstrate the tractability, safety, and imitation performance of our approach.
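To illustrate the density bookkeeping that such a safety layer requires, below is a minimal sketch, not the authors' implementation. It assumes, for illustration only, a fixed axis-aligned box of safe actions (the hypothetical bounds `safe_low`/`safe_high`), whereas the paper infers the safe set online via reachability analysis of fallback maneuvers and Lipschitz arguments, and uses additivity of measures to handle maps with multiple preimages. The sketch shows how a differentiable squashing map into the safe set yields a closed-form log-density via the change-of-variables formula, so the safe policy remains usable in end-to-end adversarial training.

```python
# Hypothetical sketch: a differentiable "safety layer" that squashes arbitrary
# policy outputs into a known-safe box of actions, plus the closed-form
# log-density of the resulting safe action via change of variables.
import torch


def safety_layer(u, safe_low, safe_high):
    """Map an unconstrained action u in R^d into the open box (safe_low, safe_high)."""
    # tanh squashes each coordinate to (-1, 1); the affine map rescales into the box.
    scale = (safe_high - safe_low) / 2.0
    center = (safe_high + safe_low) / 2.0
    a = center + scale * torch.tanh(u)
    # log |det Jacobian| of u -> a; d/du tanh(u) = 1 - tanh(u)^2.
    log_det_jac = (torch.log(scale) + torch.log1p(-torch.tanh(u) ** 2)).sum(-1)
    return a, log_det_jac


def safe_log_prob(base_dist, u, log_det_jac):
    """Change of variables: log p_A(a) = log p_U(u) - log |det J_g(u)| for a = g(u)."""
    return base_dist.log_prob(u).sum(-1) - log_det_jac


# Usage: a Gaussian policy head followed by the safety layer.
safe_low = torch.tensor([-1.0, -3.0])    # hypothetical safe action bounds
safe_high = torch.tensor([1.0, 3.0])
base = torch.distributions.Normal(torch.zeros(2), torch.ones(2))
u = base.rsample()                        # reparameterized sample keeps gradients
a, log_det_jac = safety_layer(u, safe_low, safe_high)
log_pi_a = safe_log_prob(base, u, log_det_jac)  # differentiable log-density of the safe action
```

Because the squashing map is a diffeomorphism onto the box, a single preimage suffices here; for safety layers that fold several regions of action space onto the same safe action, the density instead sums the corresponding change-of-variables terms over all preimages, which is the "additivity of measures" step mentioned in the abstract.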
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Polished writing and illustrations. Improved experimental imitation performance (this did not affect the ranking between methods, nor safety).
Code: https://github.com/boschresearch/fagil
Assigned Action Editor: ~Matthieu_Geist1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 353