Unsupervised Real-Time Garment Deformation Prediction Driven by Human Body Pose and Shape

Xinru Zhuo, Min Shi, Dengming Zhu, Guoqing Han, Zhaoxin Li

Published: 2024 · Last Modified: 08 Apr 2026 · CGI (2) 2024 · License: CC BY-SA 4.0
Abstract: Virtual character dressing simulation is widely used in digital filmmaking, 3D gaming, animation, and metaverse construction to generate realistic garment deformations and animations from human body shapes and poses. Compared with physically based methods, data-driven methods offer ease of control, speed, and data reusability, and have become increasingly mainstream. However, building the garment deformation datasets they require is time-consuming and expensive, making them ill-suited to the rapid iteration demanded by current dressing animation. We present an unsupervised clothing deformation prediction model that handles a variety of body shapes and poses. Our method trains the network without any clothing deformation dataset by converting physical constraints into optimization objectives. Using a variational autoencoder with an encoder-decoder structure, we map body parameters (pose and shape) to clothing deformation. A reparameterization module learns a conditional probability distribution in latent space from body features to clothing deformation, and a feature-to-deformation transformation space then converts the encoded vectors of different body features into the corresponding sets of deformed clothing vertices. Experimental results show that our model trains quickly without a clothing deformation dataset, even on a CPU, and rapidly synthesizes realistic clothing animation from given body parameters, excelling in prediction speed and minimizing penetration loss.
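The pipeline the abstract describes (encode body parameters, sample a latent code via the reparameterization trick, decode garment vertices, and supervise with physics-style terms instead of ground-truth deformations) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the dimensions, the SMPL-like parameter split, the random stand-in weights, and the specific strain/gravity/KL terms are all assumptions chosen only to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
POSE_DIM, SHAPE_DIM = 72, 10      # SMPL-like body pose + shape parameters
LATENT_DIM = 16
N_VERTS = 100                     # toy garment resolution

# Random weights stand in for trained network parameters.
w_enc = rng.normal(0, 0.1, (POSE_DIM + SHAPE_DIM, 2 * LATENT_DIM))
b_enc = np.zeros(2 * LATENT_DIM)
w_dec = rng.normal(0, 0.1, (LATENT_DIM, N_VERTS * 3))
b_dec = np.zeros(N_VERTS * 3)

def encode(body):
    # Map body parameters to latent mean and log-variance.
    h = body @ w_enc + b_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]

def reparameterize(mu, log_var):
    # z = mu + sigma * eps: keeps sampling differentiable in a real framework.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    # Latent code -> per-vertex garment positions.
    return np.tanh(z @ w_dec + b_dec).reshape(N_VERTS, 3)

def physics_loss(verts, rest_lengths, edge_idx):
    # Strain term: deformed edges should keep their rest lengths.
    d = verts[edge_idx[:, 0]] - verts[edge_idx[:, 1]]
    strain = np.mean((np.linalg.norm(d, axis=1) - rest_lengths) ** 2)
    # Gravity term: penalize potential energy (mean vertex height).
    gravity = np.mean(verts[:, 1])
    return strain + 0.01 * gravity

# One unsupervised forward pass: no ground-truth deformation is needed.
body = rng.standard_normal(POSE_DIM + SHAPE_DIM)
mu, log_var = encode(body)
z = reparameterize(mu, log_var)
verts = decode(z)

# Toy garment connectivity: a chain of edges with a fixed rest length.
edge_idx = np.stack([np.arange(N_VERTS - 1), np.arange(1, N_VERTS)], axis=1)
rest_lengths = np.full(N_VERTS - 1, 0.05)
kl = -0.5 * np.mean(1 + log_var - mu**2 - np.exp(log_var))
loss = physics_loss(verts, rest_lengths, edge_idx) + kl
print(verts.shape, float(loss))
```

In an actual training loop the loss would be minimized by gradient descent through a differentiable framework, so the physical constraints (and, per the abstract, a penetration term against the body surface) act as the sole supervision signal.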