NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and Animation

Published: 01 Jan 2023 · Last Modified: 13 Nov 2024 · CVPR 2023 · CC BY-SA 4.0
Abstract: The capture and animation of human hair are two of the major challenges in the creation of realistic avatars for virtual reality. Both problems are difficult because hair has complex geometry and appearance and exhibits intricate motion. In this paper, we present a two-stage approach that models hair independently of the head to address these challenges in a data-driven manner. The first stage, state compression, learns a low-dimensional latent space of 3D hair states, including motion and appearance, via a novel autoencoder-as-a-tracker strategy. To better disentangle hair from the head during appearance learning, we employ multi-view hair segmentation masks in combination with a differentiable volumetric renderer. The second stage optimizes a novel hair dynamics model that performs temporal hair transfer based on the discovered latent codes. To improve stability while driving our dynamics model, we employ the 3D point-cloud autoencoder from the compression stage to denoise the hair state. Our model outperforms the state of the art in novel view synthesis and can create novel hair animations without relying on hair observations as a driving signal. Project page at https://ziyanwl.github.io/neuwigs/.
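The two-stage pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dimensions, the head-motion feature, and the frozen random linear maps standing in for the learned encoder, decoder, and dynamics network are all hypothetical. It shows the data flow only: compress a 3D hair state to a latent code, step the latent forward with a dynamics model driven by head motion, and round-trip through the autoencoder to denoise before decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4096 hair points (x, y, z), a 256-D latent code.
N_POINTS, LATENT = 4096, 256

# Stage 1 (state compression): an autoencoder maps a per-frame 3D hair
# state to a low-dimensional latent. Frozen random linear maps stand in
# for the learned point-cloud encoder/decoder here.
W_enc = rng.standard_normal((LATENT, N_POINTS * 3)) / np.sqrt(N_POINTS * 3)
W_dec = rng.standard_normal((N_POINTS * 3, LATENT)) / np.sqrt(LATENT)

def encode(points):            # (N_POINTS, 3) -> (LATENT,)
    return W_enc @ points.reshape(-1)

def decode(z):                 # (LATENT,) -> (N_POINTS, 3)
    return (W_dec @ z).reshape(N_POINTS, 3)

# Stage 2 (dynamics): predict the next latent from the current latent
# plus a head-motion feature, so animation needs no hair observations.
HEAD_DIM = 6                   # e.g. head angular + linear velocity
W_dyn = rng.standard_normal((LATENT, LATENT + HEAD_DIM)) * 0.01

def step(z, head_motion):
    return z + W_dyn @ np.concatenate([z, head_motion])

# Roll out an animation from one captured frame and a head-motion track.
z = encode(rng.standard_normal((N_POINTS, 3)))
for t in range(10):
    z = step(z, head_motion=np.zeros(HEAD_DIM))
    # Denoising: round-trip the state through the autoencoder for
    # stability, as the paper does with its point-cloud autoencoder.
    z = encode(decode(z))
hair_points = decode(z)        # final 3D hair state for this frame
```

In the actual method the decoder produces a volumetric hair representation rendered with a differentiable volumetric renderer, and both stages are trained on multi-view capture data; the round-trip denoising step mirrors the paper's use of the compression-stage autoencoder during driving.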