Keywords: Imitation Learning, Data Scaling Law, Human Intervention
TL;DR: We introduce RaC, a human-in-the-loop data collection protocol that scales recovery and correction data to overcome imitation learning's performance plateaus, enabling robots to master long-horizon tasks with more efficient data scaling.
Abstract: Modern paradigms for robot imitation train expressive policy architectures on large amounts of human demonstration data. Yet performance on contact-rich, deformable-object, and long-horizon tasks plateaus far below perfect execution, even with thousands of expert demonstrations. This is due to the inefficiency of existing ``expert'' data collection procedures based on human teleoperation. To address this issue, we introduce RaC, a new phase of training on human-in-the-loop rollouts after imitation learning pre-training. In RaC, we fine-tune a robotic policy on human intervention trajectories that illustrate recovery and correction behaviors. Specifically, during a policy rollout, human operators intervene when failure appears imminent, first rewinding the robot back to a familiar, in-distribution state and then providing a corrective segment that completes the current sub-task. Training on this data composition expands the robotic skill repertoire to include retry and adaptation behaviors, which we show are crucial for boosting both efficiency and robustness on long-horizon tasks. Across three real-world bimanual control tasks (shirt hanging, airtight container lid sealing, and takeout box packing) and a simulated assembly task, RaC outperforms the prior state-of-the-art using 10$\times$ fewer data points. We also show that RaC enables test-time scaling: the performance of the RaC policy scales linearly with the number of recovery maneuvers it exhibits. Videos are available at https://rac-scaling-robot.github.io.
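The abstract describes RaC's intervention protocol at a high level; the following is a minimal Python sketch of how such a human-in-the-loop collection loop could be structured. All names here (`failure_imminent`, `human_rewind`, `human_correct`, `InterventionTrajectory`, and so on) are illustrative placeholders under assumed interfaces, not the authors' actual code or API.

```python
# Hypothetical sketch of a RaC-style human-in-the-loop rollout, assuming simple
# callable interfaces for the policy, environment, and human operator inputs.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Tuple


@dataclass
class InterventionTrajectory:
    """One human intervention: a rewind segment followed by a corrective segment."""
    rewind: List[Tuple[Any, Any]] = field(default_factory=list)      # (state, action) pairs
    correction: List[Tuple[Any, Any]] = field(default_factory=list)  # (state, action) pairs


def collect_rac_rollout(
    policy_act: Callable[[Any], Any],          # pre-trained policy: state -> action
    env_step: Callable[[Any, Any], Any],       # environment transition: (state, action) -> state
    failure_imminent: Callable[[Any], bool],   # operator's judgment that failure is imminent
    human_rewind: Callable[[Any], Tuple[Any, List[Tuple[Any, Any]]]],   # rewind to in-distribution state
    human_correct: Callable[[Any], Tuple[Any, List[Tuple[Any, Any]]]],  # complete the current sub-task
    init_state: Any,
    max_steps: int = 500,
) -> List[InterventionTrajectory]:
    """Roll out the policy; when the operator flags imminent failure, record a
    rewind back to a familiar state and a corrective segment, then resume."""
    state = init_state
    interventions: List[InterventionTrajectory] = []
    for _ in range(max_steps):
        if failure_imminent(state):
            traj = InterventionTrajectory()
            state, traj.rewind = human_rewind(state)        # operator rewinds the robot
            state, traj.correction = human_correct(state)   # operator finishes the sub-task
            interventions.append(traj)
        else:
            state = env_step(state, policy_act(state))
    # These recovery/correction segments would then be used to fine-tune the policy.
    return interventions
```

This is only a structural sketch of the rewind-then-correct data composition the abstract describes; the paper's actual teleoperation setup, intervention triggering, and fine-tuning procedure are not reproduced here.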
Lightning Talk Video: mp4
Submission Number: 37