Provably Efficient Off-Policy Adversarial Imitation Learning with Convergence Guarantees

TMLR Paper 6669 Authors

26 Nov 2025 (modified: 09 Dec 2025) · Under review for TMLR · CC BY 4.0
Abstract: Adversarial Imitation Learning (AIL) suffers from sample inefficiency because it relies on sufficient on-policy data to evaluate the current policy's performance during reward function updates. In this work, we study the convergence properties and sample complexity of off-policy AIL algorithms. We show that, even without importance-sampling corrections, reusing samples generated by the $o(\sqrt{K})$ most recent policies, where $K$ is the number of policy-update and reward-update iterations, does not undermine the convergence guarantees of this class of algorithms. Furthermore, our results indicate that the distribution-shift error induced by off-policy updates is outweighed by the benefit of having more data available. This provides theoretical support for the sample efficiency of off-policy AIL algorithms observed in practice.
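As a rough illustration of the sample-reuse scheme described in the abstract (not code from the paper), one can picture a replay buffer that retains rollouts from only the most recent policies and discards older ones, with no importance-sampling correction. The `PolicyWindowBuffer` class, the $\lfloor\sqrt{k}\rfloor$ window schedule, and the placeholder transitions below are hypothetical stand-ins; the paper's condition concerns an $o(\sqrt{K})$ window over the full run of $K$ iterations.

```python
import math
from collections import deque


class PolicyWindowBuffer:
    """Illustrative buffer that keeps transitions from only the most recent
    window(k) policy iterates, discarding older rollouts without any
    importance-sampling correction."""

    def __init__(self, window_fn=lambda k: max(1, math.isqrt(k))):
        # window_fn is a hypothetical schedule; sqrt(k) is one concrete
        # stand-in for a window that grows slower than sqrt(K).
        self.window_fn = window_fn
        self.rollouts = deque()  # one entry per policy iterate

    def add_rollout(self, k, transitions):
        """Store transitions collected by the k-th policy iterate."""
        self.rollouts.append((k, list(transitions)))
        # Drop rollouts generated by policies outside the current window.
        while len(self.rollouts) > self.window_fn(k):
            self.rollouts.popleft()

    def all_transitions(self):
        """Flatten the retained rollouts; in an off-policy AIL loop these
        samples would feed both the reward (discriminator) update and the
        policy update."""
        return [t for _, batch in self.rollouts for t in batch]


if __name__ == "__main__":
    buf = PolicyWindowBuffer()
    for k in range(1, 26):
        # Placeholder transitions; a real AIL loop would store
        # (state, action, next_state) tuples from environment rollouts.
        buf.add_rollout(k, [(k, "s", "a")] * 4)
    # At k = 25 the buffer holds data from the last 5 policies (sqrt(25) = 5).
    print(len(buf.all_transitions()))  # -> 20
```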
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Bo_Dai1
Submission Number: 6669