We show additional result comparisons between our method and four baseline methods: MDM, MotionLCM-V2, MLD++, and MARDM. Our method generates more realistic motion and follows the fine details of the textual condition more accurately.
Our method is capable of generating high-quality 3D human motions that follow textual instructions. We include 9 additional distinct motion examples generated by our method.
We show result comparisons between our method and two baseline methods: OmniControl and MotionLCM-V2 with ControlNet.
Our method generates motion much faster (2.51 seconds) and near-flawlessly follows the user-provided control signals. Our method is capable of generating high-quality 3D human motions that follow both textual instructions and control signals. We include 12 additional distinct motion examples generated by our method (2 for each controlled joint).
Our method is capable of spatially editing 3D human motions. We include 2 additional motion examples generated by our method.
We show additional result comparisons between our direct SMPL-H mesh generation method and an indirect approach that converts generated joints to SMPL-H meshes via a SMPL fitting model (a sketch of this fitting baseline is given after this paragraph). Our direct SMPL-H mesh generation method produces more realistic SMPL-H vertex motions and better captures natural human movement. Our method is capable of directly generating high-quality SMPL-H vertex motions that follow textual instructions. We include 9 additional distinct motion examples generated by our method.
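For reference, below is a minimal sketch of what a joints-to-SMPL-H fitting baseline can look like. It uses optimization-based fitting (in the spirit of SMPLify), which may differ from the exact fitting model used in the comparisons; it assumes the `smplx` package, a generated joint sequence `target_joints` of shape (T, 22, 3) in the SMPL-H body joint order, and a local SMPL-H model file. The function name, iteration count, and learning rate are illustrative.

```python
import torch
import smplx

def fit_smplh_to_joints(target_joints, model_path, num_iters=300, lr=0.05):
    """Illustrative sketch: fit SMPL-H parameters to a generated joint sequence."""
    T = target_joints.shape[0]
    # Build a batched SMPL-H model (one batch element per frame).
    body = smplx.create(model_path, model_type='smplh',
                        gender='neutral', batch_size=T, use_pca=False)
    target = torch.as_tensor(target_joints, dtype=torch.float32)
    # Orientation, pose, translation, and shape are nn.Parameters in smplx;
    # hand poses are left at their defaults since only body joints are fitted.
    optimizer = torch.optim.Adam(
        [body.global_orient, body.body_pose, body.transl, body.betas], lr=lr)
    for _ in range(num_iters):
        optimizer.zero_grad()
        output = body()
        # Match the first 22 regressed joints (pelvis + 21 body joints)
        # to the generated body joints.
        loss = torch.nn.functional.mse_loss(output.joints[:, :22], target)
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return body().vertices  # (T, 6890, 3) SMPL-H mesh vertex sequence
```

Because this baseline recovers the mesh only through per-frame parameter fitting, small joint errors can accumulate into unnatural vertex motion, which is the behavior the comparisons above highlight relative to direct mesh generation.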