Abstract: We present a novel framework for talking-head video editing that allows users to freely edit head pose, emotion, and eye blink while maintaining audio-visual synchronization. Unlike previous approaches, which mainly focus on generating a talking-head video, our model edits the talking heads in an input video and restores them to full frames, supporting a broader range of applications. The framework consists of two parts: a) a reconstruction-based generator that produces talking heads that fit the original frame while conforming to freely controllable attributes, including head pose, emotion, and eye blink; and b) a multiple-attribute discriminator that enforces attribute-visual synchronization. We additionally introduce attention modules and a perceptual loss to improve overall generation quality. We compare our framework against existing approaches, with the results corroborated by quantitative metrics and qualitative comparisons.
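To make the two-part design concrete, below is a minimal PyTorch sketch of an attribute-conditioned generator paired with a multiple-attribute discriminator. All module names, layer sizes, and the attribute encoding are illustrative assumptions, not the authors' actual implementation; the attention modules and perceptual loss mentioned in the abstract are omitted for brevity.

```python
# Hypothetical sketch of the framework: a reconstruction-based generator
# conditioned on controllable attributes (head pose, emotion, eye blink)
# and a discriminator that scores attribute-visual synchronization.
# Shapes and architecture are assumptions for illustration only.
import torch
import torch.nn as nn

class AttributeGenerator(nn.Module):
    def __init__(self, img_ch=3, attr_dim=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(img_ch, hidden, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 4, 2, 1), nn.ReLU(),
        )
        # attribute vector (pose / emotion / blink codes) injected at the bottleneck
        self.attr_proj = nn.Linear(attr_dim, hidden)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, img_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, frame, attrs):
        h = self.enc(frame)
        a = self.attr_proj(attrs)[:, :, None, None]  # broadcast over spatial dims
        return self.dec(h + a)

class MultiAttributeDiscriminator(nn.Module):
    """Scores whether a generated head matches the target attributes."""
    def __init__(self, img_ch=3, attr_dim=8, hidden=64):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv2d(img_ch, hidden, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(hidden, hidden, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(hidden + attr_dim, 1)

    def forward(self, img, attrs):
        return self.head(torch.cat([self.feat(img), attrs], dim=1))

frame = torch.randn(2, 3, 64, 64)   # cropped head regions from input frames
attrs = torch.randn(2, 8)           # assumed pose + emotion + blink codes
gen = AttributeGenerator()
disc = MultiAttributeDiscriminator()
out = gen(frame, attrs)             # edited heads, same size as the input crop
score = disc(out, attrs)            # per-sample attribute-sync score
```

In training, the generator would minimize a reconstruction loss against the original frame plus an adversarial term from the discriminator, which pushes generated heads to match the requested attributes.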