Keywords: Visuomotor Policy, Adaptive Policy Execution
TL;DR: We identify the challenges of speeding up the execution of learned visuomotor policies beyond the original demonstration speed, and propose a set of design choices to address them.
Abstract: Offline Imitation Learning (IL) methods such as Behavior Cloning (BC) are a simple and effective way to acquire complex robotic manipulation skills. However, existing IL-trained policies are confined to executing the task at the same speed as shown in the demonstrations. This limits the task throughput of a robotic system, a critical requirement for applications such as industrial automation. We propose SAIL (Speed-Adaptive Imitation Learning), a framework that enables faster-than-demonstration execution of policies by addressing key technical challenges in robot dynamics and state-action distribution shifts. SAIL features four tightly connected components: (1) high-gain control to enable high-fidelity tracking of IL policy trajectories, (2) consistency-preserving trajectory generation to ensure smoother robot motion, (3) adaptive speed modulation that dynamically adjusts execution speed based on motion complexity, and (4) action scheduling to handle real-world system latencies. Experimental validation on six robotic manipulation tasks shows that SAIL achieves up to a 4$\times$ speedup over demonstration speed in simulation and up to a 3.2$\times$ speedup on physical robots. Video results are available at https://sail-robot.github.io/.
Submission Number: 14