Abstract: 3D facial animation has attracted considerable attention due to its extensive applications in the multimedia field. Audio-driven 3D facial animation has been widely explored with promising results. However, multi-modal 3D facial animation, especially text-guided 3D facial animation, is rarely explored due to the lack of multi-modal 3D facial animation datasets. To fill this gap, we first construct a large-scale multi-modal 3D facial animation dataset, MMHead, which consists of 49 hours of 3D facial motion sequences, speech audio, and rich hierarchical text annotations. Each text annotation contains abstract action and emotion descriptions, fine-grained descriptions of facial and head movements (i.e., expression and head pose), and three possible scenarios that may cause such an emotion. Concretely, we integrate five public 2D portrait video datasets and propose an automatic pipeline to 1) reconstruct 3D facial motion sequences from monocular videos; and 2) obtain hierarchical text annotations with the help of AU detection and ChatGPT. Based on the MMHead dataset, we establish benchmarks for two new tasks: text-induced 3D talking head animation and text-to-3D facial motion generation. Moreover, a simple but efficient VQ-VAE-based method named MM2Face is proposed to unify the multi-modal information and generate diverse and plausible 3D facial motions, achieving competitive results on both benchmarks. Extensive experiments and comprehensive analysis demonstrate the significant potential of our dataset and benchmarks in promoting the development of multi-modal 3D facial animation.
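Since the abstract only names MM2Face as "VQ-VAE-based" without architectural details, the sketch below illustrates the generic VQ-VAE motion-tokenization idea such a method could build on. The module layout, the 56-D motion parameter vector, loss weights, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of a VQ-VAE motion tokenizer for 3D facial
# motion; NOT the MM2Face code. Dimensions and modules are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MotionVQVAE(nn.Module):
    def __init__(self, motion_dim=56, hidden_dim=256, codebook_size=512):
        super().__init__()
        # Plain MLP encoder/decoder purely for illustration; a real system
        # may use temporal convolutions or transformers over motion frames.
        self.encoder = nn.Sequential(
            nn.Linear(motion_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.codebook = nn.Embedding(codebook_size, hidden_dim)
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, motion_dim),
        )

    def quantize(self, z):
        # Nearest-neighbor lookup in the codebook (standard VQ-VAE step).
        dist = torch.cdist(z, self.codebook.weight)   # (B*T, codebook_size)
        idx = dist.argmin(dim=-1)                     # (B*T,)
        z_q = self.codebook(idx)
        # Straight-through estimator so gradients reach the encoder.
        z_q = z + (z_q - z).detach()
        return z_q, idx

    def forward(self, motion):                        # (B, T, motion_dim)
        b, t, d = motion.shape
        z = self.encoder(motion.reshape(b * t, d))
        z_q, idx = self.quantize(z)
        recon = self.decoder(z_q).reshape(b, t, d)
        # Codebook + commitment losses as in the original VQ-VAE formulation.
        codebook_loss = F.mse_loss(self.codebook(idx), z.detach())
        commit_loss = F.mse_loss(z, self.codebook(idx).detach())
        return recon, codebook_loss + 0.25 * commit_loss


if __name__ == "__main__":
    model = MotionVQVAE()
    fake_motion = torch.randn(2, 30, 56)  # 2 clips, 30 frames, 56-D params
    recon, vq_loss = model(fake_motion)
    print(recon.shape, vq_loss.item())
```

In a multi-modal generation setup, the discrete motion tokens produced by such a quantizer would then be predicted by a generator conditioned on text and/or audio features; how MM2Face fuses these modalities is described in the paper itself.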
Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Generation] Generative Multimedia, [Experience] Multimedia Applications
Relevance To Conference: 3D facial animation has numerous applications in the multimedia field, such as AR/VR content creation, games, and film production. Existing methods mainly focus on audio-driven 3D facial animation, while text-guided or multi-modal 3D facial animation is rarely explored, which limits the convenience and flexibility of 3D facial animation in multimedia applications. This gap is largely due to the absence of open-source multi-modal 3D facial animation datasets. In this paper, we present the first multi-modal 3D facial animation dataset with rich hierarchical text annotations to fill this gap. With the proposed dataset, we benchmark two relevant tasks and propose an efficient framework that solves both tasks and explores multi-modal fusion strategies for diverse multi-modal 3D facial motion generation. We will release our dataset and benchmarks for future research.
Supplementary Material: zip
Submission Number: 3596