Keywords: Offline Imitation Learning; Offline Reinforcement Learning; Data Augmentation
Abstract: Data augmentation plays a pivotal role in offline imitation learning (IL) by alleviating covariate shift, yet existing methods remain constrained. Single-step techniques frequently violate underlying system dynamics, whereas trajectory-level approaches are plagued by compounding errors or scalability limitations. Even recent Koopman-based methods typically function at the single-step level, encountering computational bottlenecks due to action-equivariance requirements and vulnerability to approximation errors. To overcome these challenges, we introduce Koopman-Assisted Trajectory Synthesis (KATS), a novel framework for generating complete, multi-step trajectories. By operating at the trajectory level, KATS effectively mitigates compounding errors. It leverages a state-equivariant assumption to ensure computational efficiency and scalability, while incorporating a refined generator matrix to bolster robustness against Koopman approximation errors. This approach enables a more direct and effective mechanism for distribution matching in offline IL. Extensive experiments demonstrate that KATS substantially enhances policy performance and achieves state-of-the-art results, especially in demanding scenarios with narrow expert data distributions.
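The abstract describes multi-step trajectory synthesis with a learned Koopman operator. The snippet below is a minimal illustrative sketch of that general idea, not the authors' implementation: it assumes a learned lifting (`encode`), a linear Koopman matrix `K` acting in the lifted space, and a map back to states (`decode`); all names and the toy linear stand-ins are assumptions for exposition only.

```python
import numpy as np

def synthesize_trajectory(s0, K, encode, decode, horizon):
    """Sketch of Koopman-style multi-step rollout: lift the initial state,
    advance it linearly with K for `horizon` steps, and decode each latent
    back to state space. Components here are hypothetical placeholders."""
    z = encode(s0)                 # lift initial state into observable space
    states = [s0]
    for _ in range(horizon):
        z = K @ z                  # one linear step in the lifted space
        states.append(decode(z))   # map back to the original state space
    return np.stack(states)

# Toy usage with random linear maps standing in for learned components.
if __name__ == "__main__":
    dim, latent_dim, horizon = 3, 8, 10
    rng = np.random.default_rng(0)
    W_enc = rng.normal(size=(latent_dim, dim))
    W_dec = rng.normal(size=(dim, latent_dim))
    K = 0.9 * np.eye(latent_dim)   # stable stand-in for a learned Koopman matrix
    traj = synthesize_trajectory(
        rng.normal(size=dim), K,
        encode=lambda s: W_enc @ s,
        decode=lambda z: W_dec @ z,
        horizon=horizon,
    )
    print(traj.shape)              # (horizon + 1, dim)
```

The point of the sketch is only the structural contrast the abstract draws: the whole trajectory is produced by repeated linear steps in a lifted space rather than by chaining a single-step augmentation model in the original state space.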
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 19398