A Multi-resolution Dataset of Self-consistent Cloth Drapes for Physics-based Upsampling

20 Sept 2023 (modified: 11 Feb 2024), Submitted to ICLR 2024
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Cloth simulation, physics-based upsampling, neural upsampling.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: The high-fidelity simulation of draped cloth is a crucial tool across a wide range of applications, spanning the gamut from immersive virtual experiences to accurate digital modeling. However, capturing the finely detailed folds, creases, contacts, and wrinkles of a cloth at equilibrium requires expensive, high-resolution simulation. To side-step these intensive computational requirements, data-driven methods have long attempted to directly upsample cheap-to-generate, low-resolution, coarse cloth drapes into rich, physically realistic drapes with fine detail. However, progress in these "physics upsampling" methods is significantly stymied by the lack of suitable data that both captures the intricate details of cloth physics and, just as important, provides consistent, artifact-free, multi-resolution models of draping from which to learn the changes and correspondences across resolutions. Existing cloth simulators (both industrial and academic) generally fail to capture the accurate draping behavior of real-world materials, lack the resolution and fidelity required for producing fine-scale cloth wrinkles, struggle to accurately resolve detailed cloth self-collision, and do not provide consistent draping geometries as input model resolution varies. At the same time, consistent and meaningful quantitative metrics for evaluating the success of physics-based upsampling methods have also been missing. To address these gaps, we introduce a large-scale dataset specifically designed for cloth-drape upsampling, built with the recently developed "Progressive Cloth Simulation" (PCS) method, together with a new set of carefully constructed benchmark evaluation metrics. PCS enables us to generate multi-resolution tuples of corresponding cloth drapes, consistent across resolution levels, over a diverse range of real-world cloth material parameters. Geometries at all resolutions are robustly interpenetration-free (a critical and necessary feature for high-quality cloth modeling), with increasingly fine details culminating in highest-resolution models that correspond to high-fidelity, completely unconstrained, and fully converged cloth simulation output. Our dataset spans a wide range of cloth configurations, collating over one million simulated meshes constructed via careful parameterization across important variations in input drape configuration. We provide geometric analyses of our dataset and benchmark five existing cloth-upsampling methods under various settings. To quantify performance, we introduce a new set of geometric and physical evaluation metrics. As we show in our analyses, the high-fidelity cloth draping in this dataset immediately exposes severe limitations in existing methods, which are challenged by both the complex contact behaviors and the real-world cloth material properties it exhibits. Recognizing these gaps in existing methods regarding collision objects and material properties, we further develop and benchmark a new, learning-based baseline method for comparison. Extensive experimental results demonstrate the effectiveness of our dataset, as well as the important real-world complexity it adds. Its self-consistent models and intricate high-resolution cloth details provide an important yet challenging benchmark, calling for future research on specialized model designs for data-driven cloth upsampling and simulation. A subset of our dataset is available at https://cloth-drape-dataset.github.io/.
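The benchmark's actual geometric and physical metrics are defined in the paper itself; purely as a minimal, illustrative sketch of the kind of geometric comparison such an evaluation involves, the Python snippet below computes a symmetric Chamfer distance between an upsampled prediction and the corresponding high-resolution ground-truth drape from one multi-resolution tuple. The file names, `load_mesh` loader, and `my_upsampler` are hypothetical placeholders, not part of the released dataset or the authors' code.

```python
# Hypothetical sketch (not the authors' released evaluation code):
# compare an upsampled coarse drape against the converged high-resolution
# ground-truth drape from one multi-resolution tuple via Chamfer distance.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred_verts: np.ndarray, gt_verts: np.ndarray) -> float:
    """Symmetric Chamfer distance between (N, 3) and (M, 3) vertex arrays."""
    pred_tree = cKDTree(pred_verts)
    gt_tree = cKDTree(gt_verts)
    d_pred_to_gt, _ = gt_tree.query(pred_verts)   # nearest GT vertex per predicted vertex
    d_gt_to_pred, _ = pred_tree.query(gt_verts)   # nearest predicted vertex per GT vertex
    return float(np.mean(d_pred_to_gt ** 2) + np.mean(d_gt_to_pred ** 2))

# Illustrative usage (assumed names, purely for exposition):
# coarse = load_mesh("drape_0001_level0.obj")     # cheap, low-resolution input drape
# gt_fine = load_mesh("drape_0001_level4.obj")    # converged high-resolution drape
# pred_fine = my_upsampler(coarse)                # upsampling method under evaluation
# print(chamfer_distance(pred_fine.vertices, gt_fine.vertices))
```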
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2916