D-Garment: Physically Grounded Latent Diffusion for Dynamic Garment Deformations

TMLR Paper 6982 Authors

12 Jan 2026 (modified: 20 Jan 2026) · Under review for TMLR · CC BY 4.0
Abstract: We present a method to dynamically deform 3D garments, represented as 3D polygon meshes, based on body shape, motion, and physical cloth material properties. Taking physical cloth properties into account allows learning a physically grounded model that is more accurate with respect to physically inspired metrics such as strain and curvature. Existing work studies pose-dependent garment modeling to generate garment deformations from example data, and data-driven dynamic cloth simulation to generate realistic garments in motion. We propose *D-Garment*, a learning-based approach trained on new data generated with a physics-based simulator. Compared to prior work, our 3D generative model learns garment deformations conditioned on physical material properties, which allows modeling loose cloth geometry, especially large deformations and dynamic wrinkles driven by body motion. Furthermore, the model can be efficiently fitted to observations captured with vision sensors, such as 3D point clouds. We leverage the ability of diffusion models to learn flexible and powerful generative priors by modeling the 3D garment in a 2D parameter space and learning a latent diffusion model on this representation, independently of the mesh resolution. This allows conditioning the global and local geometry on body and cloth material information. We quantitatively and qualitatively evaluate *D-Garment* on both simulations and data captured with a multi-view acquisition platform. Compared to recent baselines, our method is more realistic and accurate in terms of shape similarity and physical validity metrics. Code and data will be shared upon acceptance.
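The abstract describes a latent diffusion model, defined over a 2D garment parameterization, conditioned on body motion and cloth material parameters. Below is a minimal sketch of that kind of conditioned denoising setup, not the authors' implementation: the `ConditionalDenoiser` class, all dimensions, and the `body`/`material` feature names are illustrative assumptions, and a standard DDPM noise-prediction loss stands in for whatever objective the paper actually uses.

```python
# Minimal illustrative sketch (assumptions, not the paper's code): a denoiser for
# garment latents conditioned on body-motion and cloth-material features.
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    def __init__(self, latent_dim=256, body_dim=72, material_dim=4, hidden=512):
        super().__init__()
        cond_dim = body_dim + material_dim  # body descriptor + material parameters
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, hidden),  # +1 for the timestep
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, latent_dim),                 # predicted noise
        )

    def forward(self, z_t, t, body, material):
        cond = torch.cat([body, material], dim=-1)
        x = torch.cat([z_t, cond, t.float().unsqueeze(-1) / 1000.0], dim=-1)
        return self.net(x)

def ddpm_training_step(model, z0, body, material, alphas_cumprod):
    """One standard DDPM noise-prediction loss on garment latents z0."""
    b = z0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,))
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    noise = torch.randn_like(z0)
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * noise
    return torch.mean((model(z_t, t, body, material) - noise) ** 2)

if __name__ == "__main__":
    model = ConditionalDenoiser()
    alphas_cumprod = torch.cumprod(1.0 - torch.linspace(1e-4, 0.02, 1000), dim=0)
    z0 = torch.randn(8, 256)       # garment latents from some pretrained encoder (assumed)
    body = torch.randn(8, 72)      # e.g. pose/shape/motion features (assumed)
    material = torch.rand(8, 4)    # e.g. stretch/bend stiffness, density (assumed)
    loss = ddpm_training_step(model, z0, body, material, alphas_cumprod)
    loss.backward()
    print(float(loss))
```

In such a setup the material parameters enter the denoiser exactly like the body features, so the same trained prior can produce different wrinkle behavior for stiff versus loose cloth at sampling time.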
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Mathieu_Salzmann1
Submission Number: 6982