Keywords: Multimodal dialogue dataset, Multimodal conditional dialogue generation, Spoken dialogue generation
TL;DR: We propose an expressive multimodal dialogue dataset with dialogue-level style annotations using an automated pipeline, then introduce explicit and implicit control in multimodal dialogue generation.
Abstract: The recent advancement of Artificial Intelligence Generated Content (AIGC) has led to significant strides in modeling human interaction, particularly in the context of multimodal dialogue.
While current methods impressively generate realistic dialogue in isolated modalities like speech or vision, challenges remain in controllable Multimodal Dialogue Generation (MDG).
This paper focuses on the natural alignment between speech, vision, and text in human interaction, aiming for expressive dialogue generation through multimodal conditional control.
To address the insufficient richness and diversity of dialogue expressiveness in existing datasets, we introduce a novel multimodal dialogue annotation pipeline to curate dialogues from movies and TV series with fine-grained annotations of interactional characteristics.
The resulting MM-Dia dataset (360+ hours, 54,700 dialogues) facilitates explicitly controlled MDG, specifically through style-controllable dialogue speech synthesis.
In parallel, MM-Dia-Bench (309 highly expressive dialogues with visible single-/dual-speaker scenes) serves as a rigorous testbed for implicit cross-modal MDG control, evaluating audio-visual style consistency across modalities.
Extensive experiments demonstrate that training on MM-Dia significantly enhances fine-grained controllability, while benchmarks on MM-Dia-Bench reveal the limitations of current frameworks in replicating the nuanced expressiveness of human interaction.
These findings provide new insights and challenges for multimodal conditional dialogue generation.
Primary Area: datasets and benchmarks
Submission Number: 24632