Abstract: Movie Dubbing aims to convert scripts into speech that aligns with the given movie clip in both temporal and emotional aspects while preserving the vocal timbre of a brief reference audio.
Dubbing is a complex task because the generated speech must exhibit wide variations in emotion, pace, and environment to achieve such alignment.
Because movie dubbing datasets are limited in scale (due to copyright) and contaminated by background noise, learning directly from them limits the pronunciation quality of the resulting models.
To address this problem, we propose a two-stage dubbing method that allows the model to first learn pronunciation knowledge before practicing it in movie dubbing.
In the first stage, we introduce a multi-task approach to pre-train a phoneme encoder on a large-scale text-speech corpus, learning clear and natural phoneme pronunciation.
In the second stage, we devise a prosody consistency learning module to bridge emotional expression with phoneme-level prosody attributes (pitch and energy).
Finally, we design a duration consistency reasoning module to align the dubbing duration with lip movements.
Extensive experiments demonstrate that our method outperforms several state-of-the-art methods on two primary benchmarks.
The source code and model checkpoints will be released to the public.
The demos are available at https://speaker2dubber.github.io/.
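For illustration only, the following is a minimal PyTorch-style sketch of the two-stage pipeline described above. All module names (PhonemeEncoder, ProsodyConsistency, DurationAligner), feature dimensions, and the choice of simple Transformer and linear layers are our own assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code) of the described two-stage dubbing
# pipeline: stage 1 pre-trains a phoneme encoder on a large text-speech corpus;
# stage 2 adds prosody (pitch/energy) and duration modules conditioned on the clip.
import torch
import torch.nn as nn

class PhonemeEncoder(nn.Module):
    """Stage 1: learns clear phoneme pronunciations from text-speech pairs."""
    def __init__(self, n_phonemes=80, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=4)

    def forward(self, phoneme_ids):                     # (B, T_phn)
        return self.encoder(self.embed(phoneme_ids))    # (B, T_phn, d_model)

class ProsodyConsistency(nn.Module):
    """Stage 2: maps clip-level emotion features to phoneme-level pitch/energy."""
    def __init__(self, d_model=256, d_visual=512):
        super().__init__()
        self.proj = nn.Linear(d_visual, d_model)
        self.pitch_head = nn.Linear(d_model, 1)
        self.energy_head = nn.Linear(d_model, 1)

    def forward(self, phn_feats, emotion_feats):
        # emotion_feats: (B, d_visual) clip-level emotional representation
        fused = phn_feats + self.proj(emotion_feats).unsqueeze(1)
        return self.pitch_head(fused), self.energy_head(fused)

class DurationAligner(nn.Module):
    """Stage 2: predicts per-phoneme durations consistent with lip movement."""
    def __init__(self, d_model=256, d_lip=128):
        super().__init__()
        self.proj = nn.Linear(d_lip, d_model)
        self.duration_head = nn.Linear(d_model, 1)

    def forward(self, phn_feats, lip_feats):
        # lip_feats: (B, d_lip) pooled lip-motion representation
        fused = phn_feats + self.proj(lip_feats).unsqueeze(1)
        return self.duration_head(fused).squeeze(-1)    # (B, T_phn) durations

# Example stage-2 forward pass with random inputs.
encoder, prosody, aligner = PhonemeEncoder(), ProsodyConsistency(), DurationAligner()
phonemes = torch.randint(0, 80, (2, 16))
emotion, lips = torch.randn(2, 512), torch.randn(2, 128)
phn = encoder(phonemes)            # encoder pre-trained in stage 1, reused here
pitch, energy = prosody(phn, emotion)
durations = aligner(phn, lips)
```

In this sketch, the phoneme encoder trained in stage 1 is reused unchanged in stage 2, mirroring the idea of first learning pronunciation and then practicing it during dubbing; how the actual method fuses visual features is an assumption here.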
Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: Movie Dubbing aims to convert scripts into speech that aligns with the given movie clip in both temporal and emotional aspects while preserving the vocal timbre of the reference audio.
This task involves text-to-speech conversion, cross-modal prosody modeling, and vocal timbre extraction, making it a complex and comprehensive task spanning three modalities, with vast application potential in fields such as the film industry and multimedia acoustic engineering.
The development of the movie dubbing task benefits many other cross-modal speech-related tasks, such as talking head generation and lip2wav generation.
Supplementary Material: zip
Submission Number: 1863