Deconfounding Imitation Learning with Variational Inference

TMLR Paper 2584 Authors

25 Apr 2024 (modified: 26 Jun 2024) · Under review for TMLR · CC BY-SA 4.0
Abstract: Standard imitation learning can fail when the expert demonstrators have different sensory inputs from the imitating agent, because partial observability gives rise to hidden confounders in the causal graph. Prior work circumvents the confounding problem by training policies with query access to the expert's policy or by inverse reinforcement learning (IRL). Both approaches have drawbacks, however: the expert's policy may not be available, and IRL can be unstable in practice. Instead, we propose to train a variational inference model that infers the expert's latent information and to use it to train a latent-conditional policy. We prove that, under strong assumptions, this method can identify the correct imitation policy from expert demonstrations alone. In practice, we focus on a setting with weaker assumptions in which exploration data is used to learn the inference model. We show theoretically and empirically that this algorithm converges to the correct interventional policy, resolves the confounding issue, and, under certain assumptions, achieves asymptotically optimal imitation performance.
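The abstract describes training a variational inference model over the expert's latent information together with a latent-conditional policy. A minimal PyTorch sketch of that setup is given below; all module names, dimensions, and the Gaussian encoder / behavioral-cloning loss choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a latent-conditional imitation setup with a
# variational inference model; names and loss choices are assumptions,
# not the paper's actual code.
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Inference model q(z | trajectory) over the expert's hidden context."""
    def __init__(self, obs_dim, act_dim, latent_dim, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, latent_dim)
        self.log_std = nn.Linear(hidden, latent_dim)

    def forward(self, traj):  # traj: (batch, T, obs_dim + act_dim)
        _, h = self.rnn(traj)
        h = h[-1]  # last layer's final hidden state: (batch, hidden)
        return self.mu(h), self.log_std(h).exp()

class LatentConditionalPolicy(nn.Module):
    """Policy pi(a | s, z) conditioned on the inferred latent z."""
    def __init__(self, obs_dim, latent_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))

def elbo_bc_loss(encoder, policy, traj, obs, expert_act):
    """Behavioral-cloning reconstruction term plus a KL regularizer to N(0, I)."""
    mu, std = encoder(traj)
    z = mu + std * torch.randn_like(std)  # reparameterization trick
    recon = ((policy(obs, z) - expert_act) ** 2).mean()
    kl = (0.5 * (mu**2 + std**2 - 2 * std.log() - 1)).sum(-1).mean()
    return recon + kl
```

In this sketch, the KL term regularizes the inferred latent toward a standard normal prior, mirroring standard VAE training; the paper's actual objective, architecture, and use of exploration data may differ.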
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=HHcuaYmNuu
Changes Since Last Submission: In the previous submission, we accidentally used the `times` LaTeX package, which changed the font. In this version, we removed the offending package and use the default font defined in the TMLR style files. To fit the resulting paper in 12 pages, we moved the pseudocode listings to the appendix, edited the method figure to be more space-efficient, and slightly revised the phrasing.
Assigned Action Editor: ~Thomy_Phan1
Submission Number: 2584