MotionTrans: Human VR Data Enable Motion-Level Learning for Robotic Manipulation Policies

Published: 17 Sept 2025, Last Modified: 17 Sept 2025, H2R CoRL 2025 Workshop, CC BY 4.0
Keywords: human data, motion transfer, cotraining, policy learning
TL;DR: We propose MotionTrans, a framework achieving human-to-robot motion transfer for end-to-end robot policies, even in zero-shot settings. This research is the first systematic effort to validate motion-level end-to-end learning from human data.
Abstract: Scaling real robot data is a key bottleneck in imitation learning, which has led to the use of auxiliary data for policy training. While other aspects of robotic manipulation, such as image or language understanding, may be learned from internet-based datasets, acquiring motion knowledge remains challenging. Human data, with its rich diversity of manipulation behaviors, offers a valuable resource for this purpose. Although previous works show that using human data can bring benefits such as improved robustness and training efficiency, it remains unclear whether human data can realize its greatest advantage: **enabling robot policies to directly learn new motions for task completion**. In this paper, we systematically explore this potential through multi-task human-robot cotraining. We introduce **MotionTrans**, a framework that includes a data collection system, a human data transformation pipeline, and a weighted cotraining strategy. By cotraining on 30 human-robot tasks simultaneously, we directly transfer more than 10 motions from human data to deployable end-to-end robot policies. Notably, 9 tasks achieve non-trivial success rates in a zero-shot manner. **MotionTrans** also significantly improves pretraining-finetuning performance (+40% success rate). Through ablation studies, we further identify a key factor for successful motion learning: cotraining with robot data. These findings unlock the potential of motion-level learning from human data and offer insights into its effective use for training robotic manipulation policies. All data, code, and model weights will be open-sourced.
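To make the weighted cotraining idea concrete, below is a minimal sketch of mixing transformed human VR demonstrations with real robot demonstrations in one training loop. Everything here (dataset shapes, the 1.0/3.0 sampling weights, and the toy policy head) is an illustrative assumption, not the released MotionTrans implementation.

```python
# Sketch of weighted human-robot cotraining (assumed setup, not the paper's code).
import torch
from torch import nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader, WeightedRandomSampler

# Stand-in datasets: human VR demos (after the transformation pipeline) and
# robot demos, both mapped into a shared (observation, action) space.
human_data = TensorDataset(torch.randn(900, 32), torch.randn(900, 7))
robot_data = TensorDataset(torch.randn(300, 32), torch.randn(300, 7))
dataset = ConcatDataset([human_data, robot_data])

# Weighted cotraining: upweight robot samples so the smaller robot dataset is
# not drowned out by human data. The 1.0/3.0 ratio is a placeholder value.
weights = torch.cat([torch.full((len(human_data),), 1.0),
                     torch.full((len(robot_data),), 3.0)])
sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)

# Toy end-to-end policy head; a real policy would consume images and
# proprioception rather than flat feature vectors.
policy = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 7))
optim = torch.optim.Adam(policy.parameters(), lr=1e-4)

for obs, act in loader:  # one cotraining epoch over the mixed data
    loss = nn.functional.mse_loss(policy(obs), act)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

In this sketch, human and robot data share a single action space, so the only cotraining-specific choice is the per-source sampling weight; the ablation noted in the abstract corresponds to setting the robot-data weight to zero.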
Submission Number: 30