Skinning-free Accurate 3D Garment Deformation via Image Transfer

25 Sept 2024 (modified: 15 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: 3D Garment Deformation
Abstract: 3D garment animation is key to a wide range of applications, including digital humans, virtual try-on, and extended reality. This paper addresses the task of predicting 3D garment deformation from a posed body mesh. Existing learning-based methods mostly rely on linear blend skinning to decompose garment deformation into a low-frequency posed garment shape and high-frequency wrinkles. However, due to the lack of explicit skinning supervision, they often produce misaligned garment positions with undesired artifacts during garment re-posing, which corrupt the high-frequency signals. These skinning-based methods consequently fail to recover accurate wrinkle patterns. To tackle this issue, we present a skinning-free approach that reformulates the high-low frequency decomposition by estimating (i) posed vertex positions for the low-frequency garment shape, and (ii) vertex normals for high-frequency local wrinkle details. In this way, each frequency modality can be effectively decoupled and directly supervised by the geometry of the deformed garment. Moreover, we propose to encode both vertex attributes as texture images, so that 3D garment deformation can be equivalently achieved via 2D image transfer. This enables us to leverage powerful pretrained image encoders to recover high-fidelity visual details representing fine wrinkles. In addition, we model body-garment interaction via cross-attention between dense body and garment image patches, which refines the naive skinning on sparse joints. Finally, we propose a multimodal fusion that incorporates constraints from both frequency modalities and optimizes the deformed 3D garment from the transferred images. Extensive experiments show that our method significantly improves deformation accuracy on various garment types and recovers finer wrinkles than state-of-the-art methods.
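The submission itself does not include code, but the texture-image encoding at the core of the method can be illustrated with a minimal sketch. The Python snippet below is a hypothetical simplification, not the authors' implementation: it splats per-vertex positions and normals into UV-space maps by nearest-texel assignment, whereas a real pipeline would rasterize mesh faces with barycentric interpolation. All function names and the random stand-in data are placeholders.

```python
import numpy as np

def rasterize_vertex_attributes(uv, attrs, res=256):
    """Splat per-vertex attributes (e.g., positions or normals) into a
    UV-space texture image via nearest-texel assignment.

    uv    : (V, 2) per-vertex UV coordinates in [0, 1]
    attrs : (V, C) per-vertex attributes (C=3 for xyz positions or normals)
    res   : output texture resolution

    Returns a (res, res, C) float image; texels not hit by any vertex stay
    zero (a full rasterizer would cover them by interpolating over faces).
    """
    img = np.zeros((res, res, attrs.shape[1]), dtype=np.float32)
    px = np.clip((uv * (res - 1)).round().astype(int), 0, res - 1)
    img[px[:, 1], px[:, 0]] = attrs  # last write wins on texel collisions
    return img

# Hypothetical usage with stand-in data: encode a garment's vertex
# positions (low-frequency shape) and normals (high-frequency wrinkle
# detail) as two texture images.
V = 1000
uv = np.random.rand(V, 2)                 # placeholder UV layout
positions = np.random.randn(V, 3)         # placeholder vertex positions
normals = np.random.randn(V, 3)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

pos_map = rasterize_vertex_attributes(uv, positions)
nrm_map = rasterize_vertex_attributes(uv, normals)
```

Under this encoding, the deformation task becomes 2D image transfer: an image-to-image network can predict the posed position and normal maps, and per-vertex attributes are read back at the same UV locations to recover the deformed 3D garment.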
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4757