Revisiting Audio-language Pretraining for Learning General-purpose Audio Representation

19 Sept 2025 (modified: 05 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: audio-language pretraining, audio representation learning, audio understanding
Abstract: Audio-language pretraining holds promise for learning general-purpose audio representations, yet it remains underexplored compared to its vision counterpart. Crucially, there is no consensus on whether audio-language models can serve as effective general-purpose audio encoders, nor a systematic understanding of how pretraining objectives behave across diverse audio tasks and data scales. We identify three key barriers: limited large-scale audio-text corpora, insufficient caption diversity, and a lack of systematic exploration and evaluation. To fill this gap, we present the first principled empirical study of audio-language pretraining. We introduce CaptionStew, a 10.7M-caption dataset aggregating diverse open-source audio-text corpora across multiple domains and captioning styles. Using this resource, we conduct the first comprehensive evaluation comparing contrastive and captioning objectives for audio representation learning across speech, music, and environmental sound tasks. Our results not only demonstrate that audio-language pretraining yields competitive, transferable representations, but also reveal critical trade-offs: contrastive learning offers superior data efficiency, while captioning scales better with data. Furthermore, we find that supervised initialization provides diminishing returns at scale, challenging common practice. By grounding these claims in empirical evidence, we establish a viable pathway toward general-purpose audio representation learning and guidance for future research. To accelerate progress, we will release data preparation recipes, training protocols, and pretrained models.
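To make the comparison concrete, below is a minimal sketch of the two pretraining objectives the abstract contrasts. It is not the paper's implementation: it assumes a standard CLIP-style symmetric InfoNCE loss for the contrastive objective and a next-token cross-entropy loss for the captioning objective, with the encoder/decoder outputs stubbed by random tensors.

```python
# Minimal sketch (assumed formulations, not the paper's code) of the
# contrastive and captioning objectives compared in the abstract.
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired audio/text embeddings."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = audio_emb @ text_emb.t() / temperature  # (B, B) similarities
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)
    # Matched pairs lie on the diagonal; the rest serve as in-batch negatives.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

def captioning_loss(token_logits, caption_ids, pad_id=0):
    """Next-token cross-entropy for a caption decoder conditioned on audio.

    token_logits: (B, T, V) logits from the decoder.
    caption_ids:  (B, T+1) token ids including BOS, shifted for teacher forcing.
    """
    targets = caption_ids[:, 1:]  # each position predicts the next token
    return F.cross_entropy(
        token_logits.reshape(-1, token_logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,
    )

# Toy usage with random tensors standing in for model outputs.
B, D, T, V = 8, 512, 16, 1000
print(contrastive_loss(torch.randn(B, D), torch.randn(B, D)))
print(captioning_loss(torch.randn(B, T, V), torch.randint(1, V, (B, T + 1))))
```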
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 15785