MambaGesture: Enhancing Co-Speech Gesture Generation with Mamba and Disentangled Multi-Modality Fusion
Abstract: Co-speech gesture generation is crucial for producing synchronized and realistic human gestures that accompany speech, enhancing the animation of lifelike avatars in virtual environments. While diffusion models have shown impressive capabilities, current approaches often overlook a wide range of modalities and their interactions, resulting in less dynamic and contextually varied gestures. To address these challenges, we present MambaGesture, a novel framework integrating a Mamba-based attention block, MambaAttn, with a multi-modality feature fusion module, SEAD. The MambaAttn block combines the sequential data processing strengths of the Mamba model with the contextual richness of attention mechanisms, enhancing the temporal coherence of generated gestures. SEAD adeptly fuses audio, text, style, and emotion modalities, employing disentanglement to deepen the fusion process and yield gestures with greater realism and diversity. Our approach, rigorously evaluated on the multi-modal BEAT dataset, demonstrates significant improvements in Fréchet Gesture Distance (FGD), diversity scores, and beat alignment, achieving state-of-the-art performance in co-speech gesture generation.
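The abstract does not specify the internals of MambaAttn, but the general division of labor it describes — a sequential state-space recurrence for temporal coherence, combined with attention for contextual weighting — can be illustrated with a toy, dependency-free sketch. Everything below (`ssm_scan`, `attend`, the scalar recurrence `h_t = a*h_{t-1} + b*x_t`) is a hypothetical simplification for intuition only, not the authors' architecture:

```python
import math

def ssm_scan(x, a=0.9, b=0.1):
    """Toy linear state-space recurrence (the kind of sequential core
    Mamba-style models compute efficiently): h_t = a*h_{t-1} + b*x_t.
    Illustrative only; real Mamba uses input-dependent parameters."""
    h, out = 0.0, []
    for x_t in x:
        h = a * h + b * x_t
        out.append(h)
    return out

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(query, keys, values):
    """Single-query scalar dot-product attention over a sequence."""
    weights = softmax([query * k for k in keys])
    return sum(w * v for w, v in zip(weights, values))

# The SSM pass yields temporally smoothed features; attention then
# pools them with content-dependent weights -- the rough intuition
# behind pairing a recurrent scan with an attention mechanism.
seq = [1.0, 0.0, 2.0, 1.0]
states = ssm_scan(seq)
pooled = attend(1.0, states, states)
```

Since attention produces a convex combination, `pooled` always lies within the range of the scanned states, while the recurrence guarantees each state depends only on past inputs.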
Primary Subject Area: [Generation] Generative Multimedia
Secondary Subject Area: [Experience] Multimedia Applications, [Content] Multimodal Fusion, [Engagement] Emotional and Social Signals
Relevance To Conference: This work presents a substantial advancement in multimedia and multimodal processing by enhancing the synthesis of co-speech gestures from multimodal data, including audio. Our MambaGesture framework, with its novel integration of the Mamba model and SEAD fusion module, pushes the boundaries of generating natural, diverse, and synchronized gestures, thereby enriching user experiences in virtual environments, video games, and animation. By addressing the challenges of multimodal data fusion and sequence modeling, this research contributes to the development of more sophisticated multimedia systems capable of processing and synchronizing content across various modalities. The resulting coherent and dynamic gesture generation aligns closely with the goals of ACM Multimedia, fostering innovations that bridge human-computer interaction and multimedia content creation.
Supplementary Material: zip
Submission Number: 1278