AniHead: Efficient and Animatable 3D Head Avatars Generation

19 Sept 2023 (modified: 11 Feb 2024), submitted to ICLR 2024
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: diffusion model, text-to-3D, 3D head avatar
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We present AniHead, an efficient and generalizable pipeline for generating animatable 3D head avatars.
Abstract: Recent advances in diffusion models have led to great progress in generating high-quality 3D shapes with textual guidance, especially 3D head avatars. Despite these achievements, the Score Distillation Sampling (SDS) training strategy is too time-consuming for real-time applications. Moreover, the implicit representations these methods rely on make them generally unsuitable for animation. To address these problems, we present AniHead, an efficient and generalizable framework that models shape and texture separately to generate animatable 3D head avatars. We propose a novel one-stage shape prediction module driven by the parametric FLAME model. For texture modelling, a conditional diffusion model is finetuned based on the proposed mean texture token. We further introduce a data-free strategy that trains our model without collecting a large-scale training set. Extensive experiments show that our method is not only more efficient than standard SDS-based methods, but also produces high-fidelity, animatable 3D head avatars. The generated assets can be smoothly applied to downstream tasks such as video- and audio-driven head animation.
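The pipeline the abstract describes, a one-stage predictor that regresses FLAME shape parameters directly from a text embedding (avoiding iterative SDS optimisation), plus a texture diffusion model conditioned on a mean texture token, can be sketched roughly as follows. This is a minimal illustrative sketch only: all names, dimensions, the stand-in text encoder, and the linear predictor are assumptions, not the authors' implementation.

```python
import random

# Hypothetical dimensions; FLAME commonly uses ~100 shape coefficients,
# and the text-embedding size is an arbitrary placeholder here.
FLAME_SHAPE_DIM = 100
TEXT_EMB_DIM = 64

def encode_text(prompt):
    """Stand-in text encoder: deterministically hash the prompt
    into a fixed-size pseudo-embedding (placeholder for a real encoder)."""
    r = random.Random(prompt)
    return [r.gauss(0.0, 1.0) for _ in range(TEXT_EMB_DIM)]

class OneStageShapePredictor:
    """Maps a text embedding to FLAME shape coefficients in a single
    feed-forward pass, instead of per-asset SDS optimisation."""
    def __init__(self, seed=0):
        r = random.Random(seed)
        self.W = [[r.gauss(0.0, 0.01) for _ in range(TEXT_EMB_DIM)]
                  for _ in range(FLAME_SHAPE_DIM)]

    def __call__(self, emb):
        # Linear regression as the simplest possible one-stage predictor.
        return [sum(w * e for w, e in zip(row, emb)) for row in self.W]

def texture_condition(emb, mean_texture_token):
    """Conditioning vector for the texture diffusion model: the text
    embedding concatenated with a learned mean texture token."""
    return emb + mean_texture_token

prompt = "an elderly man with a grey beard"
emb = encode_text(prompt)
shape = OneStageShapePredictor()(emb)          # FLAME shape coefficients
cond = texture_condition(emb, [0.0] * TEXT_EMB_DIM)
```

A single forward pass like this is what makes the approach fast relative to SDS, which optimises each asset for thousands of iterations; the FLAME parameterisation is also what keeps the resulting head riggable for animation.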
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1766