AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding
Abstract: The paper introduces AniTalker, a framework designed to generate lifelike talking faces from a single portrait. Unlike existing models, which primarily focus on verbal cues such as lip synchronization and fail to capture the complex dynamics of facial expressions and nonverbal cues, AniTalker employs a universal motion representation that captures a wide range of facial dynamics, including subtle expressions and head movements. AniTalker enhances motion depiction through two self-supervised learning strategies: the first reconstructs target video frames from source frames of the same identity to learn subtle motion representations, and the second develops an identity encoder using metric learning while actively minimizing mutual information between the identity and motion encoders. This approach ensures that the motion representation is dynamic and devoid of identity-specific details, significantly reducing the need for labeled data. In addition, the integration of a diffusion model with a variance adapter enables the generation of diverse and controllable facial animations. AniTalker not only produces detailed and realistic facial movements but also shows strong potential for crafting dynamic avatars in real-world applications. Synthetic results can be viewed at https://anitalker.github.io.
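As a rough illustration of the two self-supervised strategies described above, the sketch below pairs a same-identity frame-reconstruction loss with a metric-learning identity loss and a simple decorrelation penalty standing in for the paper's mutual-information estimator. All module names, dimensions, and loss terms are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the identity-decoupled training objective described in the
# abstract. Encoders, renderer, and the mutual-information surrogate are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionEncoder(nn.Module):
    """Maps a frame to a motion code intended to be free of identity cues."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))
    def forward(self, frame):
        return self.net(frame)

class IdentityEncoder(nn.Module):
    """Trained with metric learning so same-identity frames embed close together."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))
    def forward(self, frame):
        return F.normalize(self.net(frame), dim=-1)

class Renderer(nn.Module):
    """Reconstructs the target frame from an (identity, motion) code pair."""
    def __init__(self, dim=512, out_pixels=3 * 256 * 256):
        super().__init__()
        self.net = nn.LazyLinear(out_pixels)
    def forward(self, id_code, motion_code):
        return self.net(torch.cat([id_code, motion_code], dim=-1))

def training_losses(src, tgt, neg_frame, motion_enc, id_enc, renderer, margin=0.2):
    """src/tgt show the same person; neg_frame shows a different person."""
    motion = motion_enc(tgt)                    # motion of the target frame
    identity = id_enc(src)                      # identity from the source frame
    recon = renderer(identity, motion)          # self-supervised reconstruction
    l_recon = F.l1_loss(recon, tgt.flatten(1))

    # Metric-learning term: pull same-identity pairs together, push others apart.
    pos, neg = id_enc(tgt), id_enc(neg_frame)
    l_metric = F.triplet_margin_loss(identity, pos, neg, margin=margin)

    # Stand-in for mutual-information minimization: penalize cross-correlation
    # between motion and identity codes (the paper uses a learned MI estimator).
    l_mi = torch.matmul(F.normalize(motion, dim=-1), identity.T).abs().mean()

    return l_recon + l_metric + l_mi
```

In this reading, the reconstruction loss forces the motion code to carry frame-to-frame dynamics, while the triplet and decorrelation terms keep identity information out of it, which is what lets one motion representation drive many different faces.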
Primary Subject Area: [Generation] Generative Multimedia
Secondary Subject Area: [Experience] Multimedia Applications
Relevance To Conference: This work makes important contributions to multimedia/multimodal processing in the following ways: (1) It proposes a novel framework called AniTalker that can generate lifelike talking faces from a single portrait image by integrating speech/audio signals. This enables realistic multimedia content creation by seamlessly combining visual and auditory modalities. (2) The key innovation is a universal motion encoder that can disentangle and capture facial dynamics, including verbal (lip movements) and non-verbal (expressions, head movements) cues, in a unified representation. This motion representation can be generalized across different identities, allowing multimodal animation of diverse human faces. (3) The framework leverages self-supervised learning on large video datasets to learn the complex mappings between audio and facial movements without requiring laborious manual annotations. This data-driven approach enables scalable multimodal modeling. (4) AniTalker employs techniques like diffusion models and variance adapters to generate diverse and controllable facial animations driven by input speech signals. This allows flexible multimodal synthesis and manipulation of the generated talking avatars. (5) Extensive evaluations demonstrate AniTalker's superiority over previous methods in generating high-fidelity, naturalistic talking faces with accurate lip-syncing and expressive motions, constituting a significant advancement in multimodal human-avatar interaction and content creation.
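The sketch below gives a rough picture of point (4): a speech-conditioned diffusion model denoises a sequence of motion codes, and a hypothetical variance adapter exposes controllable attributes predicted from speech. The architecture, noise schedule, and attribute set are illustrative assumptions rather than AniTalker's actual design.

```python
# Hypothetical sketch of speech-driven motion generation with a diffusion model and
# a variance adapter. Shapes: speech_feat is (T, cond_dim), motion codes are (T, 512).
import torch
import torch.nn as nn

class VarianceAdapter(nn.Module):
    """Predicts controllable attributes (e.g. head-pose scale) from speech features
    and injects them into the conditioning signal; users may override them."""
    def __init__(self, dim=256, n_attrs=4):
        super().__init__()
        self.predict = nn.Linear(dim, n_attrs)
        self.embed = nn.Linear(n_attrs, dim)
    def forward(self, speech_feat, attr_override=None):
        attrs = self.predict(speech_feat) if attr_override is None else attr_override
        return speech_feat + self.embed(attrs), attrs

class MotionDenoiser(nn.Module):
    """Predicts the noise added to a motion-code sequence, conditioned on speech."""
    def __init__(self, motion_dim=512, cond_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(motion_dim + cond_dim + 1, 1024), nn.SiLU(),
            nn.Linear(1024, motion_dim))
    def forward(self, noisy_motion, cond, t):
        t = t.float().unsqueeze(-1)
        return self.net(torch.cat([noisy_motion, cond, t], dim=-1))

@torch.no_grad()
def sample_motion(denoiser, adapter, speech_feat, steps=50, motion_dim=512):
    """DDPM-style ancestral sampling over a sequence of motion codes."""
    cond, _ = adapter(speech_feat)
    x = torch.randn(speech_feat.shape[0], motion_dim)
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    for i in reversed(range(steps)):
        t = torch.full((x.shape[0],), i)
        eps = denoiser(x, cond, t)
        x = (x - betas[i] / torch.sqrt(1 - alpha_bars[i]) * eps) / torch.sqrt(alphas[i])
        if i > 0:
            x = x + torch.sqrt(betas[i]) * torch.randn_like(x)
    return x  # identity-free motion codes, later rendered onto the source portrait
```

Sampling different noise seeds yields diverse motion sequences for the same speech input, while overriding the adapter's predicted attributes gives explicit control over properties such as head movement.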
Supplementary Material: zip
Submission Number: 2984