EmbodMocap: In-the-Wild 4D Human-Scene Reconstruction for Embodied Agents

Published: 26 Feb 2026 · Last Modified: 02 Apr 2026 · OpenReview Archive Direct Upload · License: arXiv.org perpetual, non-exclusive license
Abstract: Human behaviors in the real world naturally encode rich, long-term contextual information that can be leveraged to train embodied agents for perception, understanding, and action. However, existing capture systems typically rely on costly studio setups and wearable devices, limiting large-scale collection of scene-conditioned human motion data in the wild. To address this, we propose EmbodMocap, a portable and affordable data collection pipeline that uses two moving iPhones. Our key idea is to jointly calibrate the dual RGB-D sequences so that both humans and scenes are reconstructed in a unified metric world coordinate frame. This enables metric-scale, scene-consistent capture in everyday environments without static cameras or markers, seamlessly bridging human motion and scene geometry. Evaluated against optical motion-capture ground truth, the dual-view setting substantially mitigates depth ambiguity, achieving better alignment and reconstruction than single-iPhone capture or monocular models. Using the collected data, we support three embodied AI tasks: monocular human-scene reconstruction, physics-based character animation, and robot motion control. Experimental results validate the effectiveness of our pipeline and its value for advancing embodied AI research.
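The calibration behind "jointly calibrate the dual RGB-D sequences ... in a unified metric world coordinate frame" is not detailed on this page. As a rough, non-authoritative sketch of the underlying idea, the snippet below back-projects metric depth to 3D points and estimates the rigid transform between corresponding points from the two phones via the standard Kabsch algorithm (no scale term, since metric depth already fixes the scale). The function names and the choice of Kabsch alignment are illustrative assumptions, not the paper's actual procedure.

```python
# Illustrative sketch only (not the authors' method): register two metric
# RGB-D views into one world frame by rigid alignment of matched 3D points.
import numpy as np

def backproject(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Lift a depth map (H, W) in meters to camera-frame points (H*W, 3)
    using pinhole intrinsics K (3x3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch).

    src, dst: (N, 3) corresponding points, e.g. matched keypoints
    back-projected from each phone's depth map. No scale is solved for,
    because metric depth removes the scale ambiguity.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Usage: map phone-2 points into phone-1's (world) frame.
# R, t = rigid_align(pts_phone2, pts_phone1)
# pts_in_world = pts_phone2 @ R.T + t
```

In a full pipeline this per-frame alignment would typically feed a joint optimization over both camera trajectories; the sketch covers only the single-pair registration step.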