Revisiting Audio-language Pretraining for Learning General-purpose Audio Representation

Published: November 20, 2025 | arXiv ID: 2511.16757v1

By: Wei-Cheng Tseng, Xuanru Zhou, Mingyue Huo, and more

Potential Business Impact:

Teaches computers to understand many kinds of sound, including speech, music, and environmental audio.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Audio-language pretraining holds promise for general-purpose audio understanding, yet remains underexplored compared to its vision counterpart. While vision-language models like CLIP serve as widely adopted foundations, existing audio-language models primarily excel at retrieval tasks with limited adoption as general-purpose encoders. We identify three key barriers: limited large-scale audio-text corpora, insufficient caption diversity, and lack of systematic exploration and evaluation. To this end, we introduce CaptionStew, a 10.7M caption dataset aggregating diverse open-source audio-text corpora across multiple domains and captioning styles. Using this resource, we conduct the first comprehensive evaluation comparing contrastive and captioning objectives for audio representation learning across speech, music, and environmental sound tasks. Our results demonstrate that audio-language pretraining yields competitive, transferable representations. Through systematic data-scaling experiments, we reveal complementary objective strengths: contrastive learning achieves superior data efficiency at smaller scales, while captioning demonstrates better scalability on language-involved audio understanding tasks. We also find that common supervised initialization practices provide diminishing returns at scale, challenging current approaches. These findings establish audio-language pretraining as a viable pathway toward general-purpose audio representations, guiding future research. To accelerate progress, we release data preparation recipes, training protocols, and pretrained models, paving the way toward universal audio understanding.
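The abstract contrasts two pretraining objectives: contrastive learning (CLIP-style, matching audio and text embeddings) and captioning (generating text from audio). As a rough illustration of the former, here is a minimal sketch of a symmetric contrastive (InfoNCE) loss over a batch of paired audio/text embeddings; the function name and NumPy implementation are illustrative assumptions, not code from the paper:

```python
import numpy as np

def symmetric_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE loss.

    audio_emb, text_emb: (n, d) arrays of L2-normalized embeddings,
    where row i of each array forms a matched audio-text pair.
    """
    # Cosine-similarity logits, scaled by temperature.
    logits = (audio_emb @ text_emb.T) / temperature
    n = logits.shape[0]

    def log_softmax(x, axis):
        x = x - x.max(axis=axis, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    idx = np.arange(n)
    # Matched pairs lie on the diagonal; average the audio-to-text
    # and text-to-audio cross-entropy terms.
    loss_a2t = -log_softmax(logits, axis=1)[idx, idx]
    loss_t2a = -log_softmax(logits, axis=0)[idx, idx]
    return float((loss_a2t.mean() + loss_t2a.mean()) / 2)

# Toy batch: 3 paired embeddings, L2-normalized.
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Perfectly aligned pairs give a low loss; mispaired rows give a higher one.
aligned = symmetric_contrastive_loss(emb, emb)
shuffled = symmetric_contrastive_loss(emb, emb[[1, 2, 0]])
```

The captioning objective the paper compares against would instead train a decoder with a token-level cross-entropy loss on the caption text, conditioned on the audio encoder's output.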

Country of Origin
🇺🇸 United States

Page Count
22 pages

Category
Electrical Engineering and Systems Science:
Audio and Speech Processing