ListenFormer: Responsive Listening Head Generation with Non-autoregressive Transformers

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM 2024 Poster · CC BY 4.0
Abstract: As one of the crucial elements in human-robot interaction, responsive listening head generation has attracted considerable attention from researchers. It aims to generate a listening head video based on a speaker's audio and video as well as a reference listener image. However, existing methods exhibit two limitations: 1) their generation capability is limited, so the generated videos are far from realistic; and 2) they mostly employ autoregressive generative models and therefore cannot mitigate the risk of error accumulation. To tackle these issues, we propose ListenFormer, which leverages the powerful temporal modeling capability of transformers for generation. With the proposed two-stage training method, it performs non-autoregressive prediction while achieving both temporal continuity and overall consistency in its outputs. To fully utilize the information in the speaker inputs, we design an audio-motion attention fusion module that strengthens the correlation between audio and motion features for accurate responses. Additionally, we propose a novel decoding method for ListenFormer, a sliding window with a large shift, which is both computationally efficient and effective. Extensive experiments show that ListenFormer outperforms existing state-of-the-art methods on the ViCo and L2L datasets, and a perceptual user study demonstrates the comprehensive performance of our method in generation diversity, identity preservation, speaker-listener synchronization, and attitude matching.
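To illustrate the kind of cross-modal fusion the abstract refers to, below is a minimal, hypothetical sketch of fusing speaker audio and motion features with cross-attention. The module name `AudioMotionFusion`, the feature dimension, the number of heads, and the residual design are assumptions for illustration only and are not taken from the paper.

```python
import torch
import torch.nn as nn


class AudioMotionFusion(nn.Module):
    """Illustrative cross-attention fusion of speaker audio and motion features.

    A sketch only; the actual ListenFormer fusion module may differ.
    """

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, motion_feats: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
        # motion_feats: (B, T, dim) speaker motion features, used as queries
        # audio_feats:  (B, T, dim) speaker audio features, used as keys/values
        fused, _ = self.attn(motion_feats, audio_feats, audio_feats)
        # Residual connection followed by layer norm
        return self.norm(motion_feats + fused)


if __name__ == "__main__":
    fusion = AudioMotionFusion()
    motion = torch.randn(2, 50, 256)  # batch of 2 clips, 50 frames each
    audio = torch.randn(2, 50, 256)
    out = fusion(motion, audio)
    print(out.shape)  # torch.Size([2, 50, 256])
```

The fused features would then condition a non-autoregressive transformer decoder that predicts listener motion for a whole window of frames at once, rather than frame by frame.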
Primary Subject Area: [Generation] Generative Multimedia
Relevance To Conference: Responsive listening head generation refers to the ability of a system, typically an AI agent or a robot, to generate appropriate responses based on the analysis of multimodal inputs such as speech, facial expressions, and other sensory data. This capability is crucial for effective communication and interaction between humans and machines, especially in multimedia-rich environments. In our work, the proposed ListenFormer generates more natural and diverse listener videos conditioned on a speaker's audio and visual features, greatly improving the listening responses of multimodal virtual characters. Moreover, a novel audio-visual fusion method is introduced to explore cross-modal fusion in the responsive listening head generation task. Beyond modeling everyday conversational scenarios, our work holds great potential for designing synthesized listeners in multimedia applications such as VR, gaming, and film production.
Supplementary Material: zip
Submission Number: 2920