Keywords: audio-language pretraining, audio representation learning, audio understanding
Abstract: Audio-language pretraining (ALP) holds promise for learning general-purpose audio representations, yet remains underexplored.
Crucially, there is no consensus on whether audio–language models can build effective general-purpose audio encoders, nor a systematic understanding of how pretraining objectives behave across diverse tasks and scales.
We identify three key barriers: limited scale of audio-text corpora, insufficient caption diversity, and lack of systematic exploration and evaluation.
To fill this gap, we present the first principled empirical study of ALP.
We first introduce CaptionStew, a 10.7M caption dataset aggregating open-source audio-text corpora across multiple domains and captioning focuses.
We then conduct the first comprehensive evaluation comparing contrastive and captioning objectives for learning audio representations across speech, music, and environmental sound tasks.
Our results not only demonstrate that ALP yields competitive, transferable representations, but also reveal critical trade-offs: contrastive learning offers superior data efficiency, while captioning exhibits better scalability.
Furthermore, we find that supervised initialization provides diminishing returns at scale, challenging common practices.
By grounding these claims in empirical evidence, we establish a viable pathway toward general-purpose audio representation learning, guiding future research.
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: Multimodality and Language Grounding to Vision, Robotics and Beyond; Speech Recognition, Text-to-Speech and Spoken Language Understanding
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 5363