Abstract: Recently, the AI community has made significant strides in developing powerful foundation models, driven by large-scale multimodal datasets. However, for audio representation learning, existing datasets suffer from limitations in the following aspects: insufficient volume, simplistic content, and arduous collection procedures. To establish an audio dataset with high-quality captions, we propose an innovative, automatic approach that leverages multimodal inputs, such as video frames and audio streams. Specifically, we construct a large-scale, high-quality audio-language dataset, named Auto-ACD, comprising over 1.5M audio-text pairs. We exploit a series of pre-trained models and APIs to determine audio-visual synchronisation, generate image captions, detect objects, and produce audio tags for specific videos. Subsequently, we employ an LLM to paraphrase a congruent caption for each audio clip, guided by the extracted multimodal clues. To demonstrate the effectiveness of the proposed dataset, we train widely used models on it and show performance improvements on various downstream tasks, namely audio-language retrieval, audio captioning, and zero-shot classification. In addition, we establish a novel test set with environmental information and provide a benchmark for audio-text tasks.
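To illustrate the caption-assembly step described in the abstract, here is a minimal sketch in Python: multimodal clues (an image caption, detected objects, audio tags) are gathered for a clip and folded into a single instruction for an LLM. The function names, the canned clue values, and the prompt wording are illustrative assumptions, not the actual Auto-ACD pipeline or prompts.

```python
# Hypothetical sketch of assembling multimodal clues into an LLM prompt.
# All names and values below are placeholders, not the authors' pipeline.

def gather_clues(clip_id: str) -> dict:
    """Stand-in for the pre-trained models / APIs mentioned in the abstract
    (image captioning, object detection, audio tagging). Returns canned
    values so the sketch is self-contained."""
    return {
        "image_caption": "a man rides a motorcycle down a rural road",
        "objects": ["motorcycle", "person", "trees"],
        "audio_tags": ["engine sound", "wind noise"],
    }

def build_prompt(clues: dict) -> str:
    """Fold the visual and acoustic clues into one instruction asking the
    LLM to describe only what is audible, including the environment."""
    return (
        "Write one natural sentence describing the sound of this clip, "
        "including the likely environment.\n"
        f"Visual caption: {clues['image_caption']}\n"
        f"Detected objects: {', '.join(clues['objects'])}\n"
        f"Audio tags: {', '.join(clues['audio_tags'])}"
    )

if __name__ == "__main__":
    prompt = build_prompt(gather_clues("clip_0001"))
    print(prompt)  # This prompt would then be sent to an LLM for the caption.
```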
Primary Subject Area: [Content] Media Interpretation
Secondary Subject Area: [Generation] Multimedia Foundation Models
Relevance To Conference: In this work, we propose an innovative, automatic approach that leverages multimodal inputs, and construct a large-scale, high-quality audio-language dataset. This work addresses the limitations of current audio-language datasets, significantly improves performance on downstream tasks, and aligns well with the multimedia focus of the conference.
Supplementary Material: zip
Submission Number: 4160