Interactive and Hybrid Imitation Learning: Provably Beating Behavior Cloning

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY-NC 4.0
Keywords: imitation learning, online learning, reinforcement learning
TL;DR: Measure annotation cost per state, not per trajectory: Stagger (State-wise DAgger) and our hybrid Warm-Stagger beat Behavior Cloning, giving the first formal proof that state-wise interactive IL outperforms Behavior Cloning.
Abstract: Imitation learning (IL) is a paradigm for learning sequential decision-making policies from experts, leveraging offline demonstrations, interactive annotations, or both. Recent advances show that when annotation cost is tallied per trajectory, Behavior Cloning (BC), which relies solely on offline demonstrations, cannot be improved in general, leaving only limited conditions under which interactive methods such as DAgger can help. We revisit this conclusion and prove that when annotation cost is measured per state, algorithms using interactive annotations can provably outperform BC. Specifically: (1) we show that Stagger, a one-sample-per-round variant of DAgger, provably beats BC in low-recovery-cost settings; (2) we initiate the study of hybrid IL, where the agent learns from both offline demonstrations and interactive annotations. We propose Warm-Stagger, whose learning guarantee is not much worse than using either data source alone. Furthermore, motivated by compounding error and the cold-start problem in imitation learning practice, we give an MDP example in which Warm-Stagger incurs significantly lower annotation cost; (3) experiments on MuJoCo continuous-control tasks confirm that, with a modest cost ratio between interactive and offline annotations, interactive and hybrid approaches consistently outperform BC. To the best of our knowledge, our work is the first to highlight the benefit of state-wise interactive annotation and hybrid feedback in imitation learning.
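The abstract does not spell out pseudocode, so the sketch below is only a rough illustration of the two ideas on a toy 1D task: Stagger queries the expert on one visited state per round and refits on the aggregated state-wise annotations, and Warm-Stagger seeds the same loop with offline demonstrations. The toy environment, the 1-nearest-neighbor learner `fit_policy`, and the uniform choice of which visited state to annotate are illustrative assumptions, not the paper's actual algorithms or analysis.

```python
# Hypothetical sketch of Stagger / Warm-Stagger as described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def fit_policy(states, actions):
    """1-nearest-neighbor behavior cloning: act as the expert did at the
    closest annotated state. A stand-in for any supervised learner."""
    states, actions = np.asarray(states, float), np.asarray(actions, float)
    def policy(s):
        return actions[np.argmin(np.abs(states - s))]
    return policy

def rollout(policy, horizon, start=0.0):
    """Toy 1D chain: the state moves by the chosen action in {-1, +1}."""
    s, visited = start, []
    for _ in range(horizon):
        visited.append(s)
        s = s + policy(s)
    return visited

expert = lambda s: 1.0 if s < 5.0 else -1.0  # expert drives the state to 5

def behavior_cloning(num_demos, horizon):
    """BC baseline: learn only from offline expert trajectories."""
    demos = [(s, expert(s)) for _ in range(num_demos)
             for s in rollout(expert, horizon)]
    return fit_policy(*zip(*demos))

def stagger(rounds, horizon, warm_data=()):
    """Stagger: each round, roll out the current learner, query the expert
    on ONE visited state (annotation cost counted per state), aggregate,
    and refit -- a one-sample-per-round variant of DAgger."""
    data = list(warm_data)
    policy = (fit_policy(*zip(*data)) if data
              else (lambda s: rng.choice([-1.0, 1.0])))  # cold start
    for _ in range(rounds):
        visited = rollout(policy, horizon)
        s_q = visited[rng.integers(len(visited))]  # one state-wise query
        data.append((s_q, expert(s_q)))
        policy = fit_policy(*zip(*data))
    return policy

def warm_stagger(num_demos, rounds, horizon):
    """Warm-Stagger: warm-start from offline demonstrations, then refine
    with state-wise interactive annotations."""
    warm = [(s, expert(s)) for _ in range(num_demos)
            for s in rollout(expert, horizon)]
    return stagger(rounds, horizon, warm_data=warm)

for name, pi in [("BC", behavior_cloning(2, 10)),
                 ("Warm-Stagger", warm_stagger(2, 20, 10))]:
    print(name, [round(s, 1) for s in rollout(pi, 10)])  # states near 5
```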
Supplementary Material: zip
Primary Area: Reinforcement learning (e.g., decision and control, planning, hierarchical RL, robotics)
Submission Number: 26814