Abstract: Speech-language multi-modal learning presents a significant challenge due to the fine-grained, nuanced information inherent in speech style. A large-scale dataset offering an elaborate understanding of speech style is therefore urgently needed to enable insightful interplay between speech audio and natural language. However, constructing such a dataset entails a fundamental trade-off between large-scale data collection and high-quality annotation. To tackle this challenge, we propose an automatic speech annotation system for expressiveness interpretation that annotates in-the-wild speech clips with expressive and vivid human-language descriptions. Speech clips are first processed by a series of expert classifiers and captioning models to capture diverse speech characteristics, after which a fine-tuned LLaMA generates customized annotations. Unlike previous tag- or template-based annotation frameworks, which offer limited information and diversity, our system provides an in-depth understanding of speech style through tailored natural-language descriptions, thereby enabling accurate and voluminous data generation for large-model training. With this system, we create SpeechCraft, a fine-grained bilingual expressive speech dataset distinguished by highly descriptive natural-language style prompts, containing approximately 2,000 hours of audio and encompassing over two million speech clips. Extensive experiments demonstrate that the proposed dataset significantly boosts performance on speech-language tasks in both stylistic speech synthesis and speech style understanding.
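To make the two-stage design concrete, the following minimal Python sketch mirrors the pipeline described above (expert models first, then an LLM rewrite). Every function name, attribute key, and return value here is a hypothetical placeholder standing in for a real pretrained model, not the system's actual interface.

```python
# Illustrative sketch only: each function below is a stub standing in for a
# real pretrained model; none of this is the paper's actual interface.

def run_expert_models(wav_path: str) -> dict:
    """Stage 1: a bank of expert classifiers/captioners extracts raw style
    attributes from an in-the-wild clip. Dummy outputs replace inference."""
    return {
        "gender": "female",      # speaker-attribute classifier
        "emotion": "excited",    # speech-emotion recognizer
        "pitch": "high",         # prosody analysis
        "speed": "fast",         # prosody analysis
        "transcript": "I can't believe we actually won!",  # ASR output
    }

def rewrite_with_llm(prompt: str) -> str:
    """Stage 2: a fine-tuned LLM (LLaMA in the paper) rewrites sparse tags
    into a vivid natural-language style description. Stubbed here."""
    return ("A young woman speaks in a high, quick, excited voice, "
            "barely containing her delight.")

def annotate(wav_path: str) -> str:
    """End-to-end: extract attributes, then prompt the LLM for a caption."""
    attrs = run_expert_models(wav_path)
    prompt = ("Describe the speaking style in one expressive sentence. "
              f"Attributes: {attrs}")
    return rewrite_with_llm(prompt)

print(annotate("clip_0001.wav"))  # hypothetical file path
```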
Primary Subject Area: [Content] Media Interpretation
Secondary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: Our manuscript explores expressive speech data, placing it at the forefront of speech technology, a pivotal element of multimedia information. More specifically, our study delves into speech content understanding and its interpretation, whose relevance is highlighted by several key aspects:
1. We introduce an Automatic Speech Annotation System designed to enrich speech data with natural and expressive descriptions. This approach requires a deep analysis and comprehension of speech as a form of multimedia information and represents a valuable effort to extend audio data into other modalities within the multimedia field.
2. We also present the SpeechCraft dataset, which features approximately 2,000 hours of audio across more than two million bilingual clips. This dataset is expected to have a significant impact on multimedia content understanding and generation tasks.
3. Leveraging the strengths of the proposed dataset, we pioneer two novel speech-language tasks: fine-grained expressive speech synthesis and automated speech-style captioning. These initiatives significantly broaden the scope of research and application within the multimedia field.
Overall, our contributions align with the ACM MM conference's focus, offering innovative approaches to the interpretation and comprehension of multimedia.
Supplementary Material: zip
Submission Number: 5510