Visual Imitation Enables Contextual Humanoid Control

Authors: RSS 2025 Workshop EgoAct Submission 15 Authors

Published: 20 May 2025 (modified: 10 Jun 2025) · RSS 2025 Workshop EgoAct Submission · License: CC BY 4.0
Keywords: Visual Imitation, Humanoids, Reinforcement Learning, Reconstruction, Real2Sim2Real
TL;DR: Reconstruct the human and the environment from video, then train a policy that controls a humanoid to perform those skills in the real world.
Abstract: How can we teach humanoids to climb staircases and sit on chairs using the surrounding environment context? Arguably the simplest way is to just show them: casually capture a human motion video and feed it to the humanoid. We introduce **VideoMimic**, a real-to-sim-to-real pipeline that mines everyday videos, jointly reconstructs the humans and the environment, and produces whole-body control policies for humanoid robots that perform the corresponding skills. We demonstrate the results of our pipeline on real humanoid robots, showing robust, repeatable contextual control such as staircase ascents and descents, sitting down on and standing up from chairs and benches, and other dynamic whole-body skills, all from a single policy conditioned on the environment and global root commands. We hope our data and approach help enable a scalable path towards teaching humanoids to operate in diverse real-world environments.
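
The abstract's central claim is that a single policy covers all of these skills once it is conditioned on the environment and on global root commands. As a rough illustration only, the sketch below shows one common way such conditioning is structured in learned locomotion controllers. This is not the paper's architecture; the module names, observation contents, and all dimensions are assumptions.

```python
# Minimal sketch (not the authors' code): a policy conditioned on an
# egocentric terrain heightmap (environment context) and a global root
# command, alongside proprioception. All sizes below are illustrative.
import torch
import torch.nn as nn

class ContextConditionedPolicy(nn.Module):
    def __init__(self, num_joints=23, proprio_dim=69, heightmap_dim=121, cmd_dim=3):
        super().__init__()
        # Encode a local terrain heightmap (e.g. an 11x11 grid of heights
        # sampled around the robot's base) into a compact context vector.
        self.terrain_encoder = nn.Sequential(
            nn.Linear(heightmap_dim, 128), nn.ELU(),
            nn.Linear(128, 32),
        )
        # The policy consumes proprioception, the terrain embedding, and a
        # global root command (e.g. desired root velocity and heading).
        self.backbone = nn.Sequential(
            nn.Linear(proprio_dim + 32 + cmd_dim, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
            nn.Linear(256, num_joints),  # joint-position targets for a PD controller
        )

    def forward(self, proprio, heightmap, root_cmd):
        ctx = self.terrain_encoder(heightmap)
        return self.backbone(torch.cat([proprio, ctx, root_cmd], dim=-1))

# One rollout step with dummy observations.
policy = ContextConditionedPolicy()
proprio = torch.zeros(1, 69)       # joint positions/velocities, base orientation, ...
heightmap = torch.zeros(1, 121)    # flat ground in this dummy example
root_cmd = torch.tensor([[0.5, 0.0, 0.0]])  # e.g. walk forward at 0.5 m/s
joint_targets = policy(proprio, heightmap, root_cmd)
```

Under this kind of factoring, stairs, chairs, and flat ground differ only in the heightmap input, which is what lets one set of weights express multiple contextual skills.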
Submission Number: 15