EgoVLA: Learning Vision-Language-Action Models from Egocentric Human Videos

08 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: vision-language-action model, VLA, manipulation, robotics, human video, egocentric
Abstract: Real-robot data collection for imitation learning has driven significant advances in robotic manipulation. However, the requirement for robot hardware fundamentally constrains the scale of the data. In this paper, we explore training Vision-Language-Action (VLA) models on egocentric human videos. The benefit of human videos lies not only in their scale but, more importantly, in the richness of their scenes and tasks. Starting from a VLA trained on human video to predict human wrist and hand actions, we apply inverse kinematics and retargeting to convert the human actions into robot actions. We then fine-tune the model on a small number of robot manipulation demonstrations to obtain the robot policy, EgoVLA. We also propose a simulation benchmark, the Ego Humanoid Manipulation Benchmark, featuring diverse bimanual manipulation tasks with demonstrations. We fine-tune and evaluate EgoVLA on the Ego Humanoid Manipulation Benchmark, show significant improvements over baselines, and ablate the importance of human data.
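The retargeting step mentioned in the abstract can be illustrated with a toy example. The sketch below is not the paper's implementation: it assumes a hypothetical planar 2-link robot arm with known link lengths and uses the standard closed-form inverse-kinematics solution to map a predicted human wrist position to robot joint angles; the function names and link lengths are made up for illustration.

```python
import math

# Hypothetical link lengths (meters) for a toy planar 2-link arm.
L1, L2 = 0.3, 0.25

def two_link_ik(x, y, l1=L1, l2=L2):
    """Closed-form IK for a planar 2-link arm: joint angles whose
    end-effector (the retargeted 'wrist') reaches (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)  # elbow-down solution
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def forward(q1, q2, l1=L1, l2=L2):
    """Forward kinematics, used here to sanity-check the IK solution."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Retarget a (hypothetical) predicted human wrist position to joint angles.
q1, q2 = two_link_ik(0.4, 0.2)
```

A real humanoid uses a full-arm kinematic model and additionally retargets finger keypoints to the robot hand, but the principle, solving for joint angles that reproduce the predicted human wrist pose, is the same.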
Primary Area: applications to robotics, autonomy, planning
Submission Number: 2899