Generating Velocity-Adaptive Manipulation Through Learning from Human Movement Speed Variations

Published: 2025 · Last Modified: 25 Jan 2026 · Humanoids 2025 · CC BY-SA 4.0
Abstract: We propose an imitation learning framework that enables robots to acquire speed-adaptive manipulation skills from expert demonstrations exhibiting diverse motion velocities. Traditional imitation learning methods often assume temporally aligned, uniformly paced demonstrations, leading to policies constrained to a narrow range of execution speeds. However, real-world human demonstrations, particularly those collected in-the-wild without explicit instruction, naturally vary in motion speed. When trained on such data, conventional behavior cloning tends to produce averaged actions, often resulting in unreliable task execution. To address this limitation, we introduce a novel approach that conditions the policy on target motion velocities, enabling the generation of task-consistent actions at desired speeds. This capability is especially important for human-robot interaction, where the robot must flexibly adapt to human behavior. We evaluate the proposed method on object manipulation tasks, including grasp-and-place scenarios, in both simulated and real-world environments. Experimental results show that our method effectively learns speed-adaptive policies that generalize across a wide range of target velocities, outperforming standard imitation learning baselines.
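The core idea in the abstract, conditioning the policy on a target motion velocity so it reproduces demonstrations at the requested speed instead of averaging across speeds, can be illustrated with a minimal sketch. The names, the linear policy class, and the synthetic reaching task below are illustrative assumptions, not the paper's actual architecture or tasks:

```python
# Hypothetical sketch of velocity-conditioned behavior cloning.
# Demonstrations of the same reach are recorded at diverse speeds; the
# conditioned policy takes the target speed as an extra input, while the
# unconditioned baseline collapses all speeds into one averaged action.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demos: 1-D state s, expert action a = speed * (goal - s).
goal = 1.0
speeds = rng.uniform(0.2, 2.0, size=500)   # demo velocities vary widely
states = rng.uniform(0.0, 1.0, size=500)
actions = speeds * (goal - states)         # expert action per demonstration

# Velocity-conditioned policy: linear in [s, v, s*v, 1], fit by least squares.
X_cond = np.stack([states, speeds, states * speeds, np.ones_like(states)], axis=1)
w_cond, *_ = np.linalg.lstsq(X_cond, actions, rcond=None)

# Unconditioned baseline: sees only the state, so it averages over speeds.
X_base = np.stack([states, np.ones_like(states)], axis=1)
w_base, *_ = np.linalg.lstsq(X_base, actions, rcond=None)

def act_cond(s, v):
    """Action at state s for a commanded target velocity v."""
    return np.array([s, v, s * v, 1.0]) @ w_cond

def act_base(s):
    """Baseline action at state s (no velocity input)."""
    return np.array([s, 1.0]) @ w_base

# The conditioned policy tracks the requested speed; the baseline emits a
# single averaged action regardless of the speed the demos were shown at.
s = 0.5
print(act_cond(s, 0.5), act_cond(s, 1.5))
print(act_base(s))
```

In this toy setting the expert action is exactly linear in the conditioned features, so least squares recovers it; the baseline can only regress to the mean demonstration speed, which mirrors the abstract's claim that conventional behavior cloning on speed-diverse data produces averaged, unreliable actions.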