Authors that are also TMLR Expert Reviewers: ~Taco_Cohen1
Abstract: Standard imitation learning can fail when the expert demonstrators have different sensory inputs than the imitating agent. This is because partial observability gives rise to hidden confounders in the causal graph. In previous work, the confounding problem has been worked around by training policies with query access to the expert’s policy or with inverse reinforcement learning (IRL). However, both approaches have drawbacks: the expert’s policy may not be available, and IRL can be unstable in practice. Instead, we propose to train a variational inference model to infer the expert’s latent information and use it to train a latent-conditional policy. We prove that, under strong assumptions, this method makes it theoretically possible to identify the correct imitation policy from expert demonstrations alone. In practice, we focus on a setting with weaker assumptions, where we use exploration data to learn the inference model. We show theoretically and empirically that this algorithm converges to the correct interventional policy, resolves the confounding issue, and, under certain assumptions, achieves asymptotically optimal imitation performance.
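To make the high-level recipe in the abstract concrete, below is a minimal PyTorch sketch of the general idea: an amortized inference model q(z | trajectory) infers a latent from expert demonstrations, and a latent-conditional policy pi(a | s, z) is trained with an ELBO-style imitation objective. All architectures, dimensions, hyperparameters, and the synthetic stand-in data are illustrative assumptions, not the paper's actual implementation; see the linked repository for the authors' code.

```python
# Hypothetical sketch: variational inference model + latent-conditional policy.
# Everything here (dimensions, networks, synthetic data) is an illustrative
# assumption, not the paper's actual implementation.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

S, A, Z, T = 8, 2, 4, 16  # state dim, action dim, latent dim, trajectory length

class InferenceModel(nn.Module):
    """Amortized q(z | trajectory): encodes (s, a) pairs into a Gaussian latent."""
    def __init__(self):
        super().__init__()
        self.enc = nn.GRU(S + A, 32, batch_first=True)
        self.head = nn.Linear(32, 2 * Z)

    def forward(self, states, actions):
        _, h = self.enc(torch.cat([states, actions], dim=-1))
        mu, log_std = self.head(h[-1]).chunk(2, dim=-1)
        return Normal(mu, log_std.exp())

class LatentConditionalPolicy(nn.Module):
    """pi(a | s, z): a Gaussian policy conditioned on the inferred latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(S + Z, 64), nn.Tanh(), nn.Linear(64, 2 * A))

    def forward(self, states, z):
        mu, log_std = self.net(torch.cat([states, z], dim=-1)).chunk(2, dim=-1)
        return Normal(mu, log_std.exp())

q, pi = InferenceModel(), LatentConditionalPolicy()
opt = torch.optim.Adam([*q.parameters(), *pi.parameters()], lr=1e-3)
prior = Normal(torch.zeros(Z), torch.ones(Z))

# Synthetic stand-in for a batch of expert demonstration trajectories.
states = torch.randn(32, T, S)
actions = torch.randn(32, T, A)

for _ in range(100):
    posterior = q(states, actions)
    z = posterior.rsample()                   # reparameterized latent sample
    z_seq = z.unsqueeze(1).expand(-1, T, -1)  # broadcast z across timesteps
    log_prob = pi(states, z_seq).log_prob(actions).sum(-1).mean()
    kl = kl_divergence(posterior, prior).sum(-1).mean()
    loss = -log_prob + kl                     # ELBO-style imitation objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At deployment time the expert's latent is unobserved, so the latent fed to the policy would have to come from somewhere other than q(z | expert trajectory); the paper's setting (e.g., inference from the agent's own exploration data) addresses exactly that gap, which this sketch does not.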
Certifications: Expert Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=HHcuaYmNuu
Changes Since Last Submission: In the previous submission, we accidentally used the `times` LaTeX package, which changed the font. In this version, we removed the offending package to use the default font defined in the TMLR style files. To make the resulting paper fit within 12 pages, we moved the pseudocode listings to the appendix, edited the method figure to be more space-efficient, and slightly edited the phrasing.
Code: https://github.com/vuoristo/deconfounding
Assigned Action Editor: ~Thomy_Phan1
Submission Number: 2584