TACOS: Temporally-aligned Audio CaptiOnS for Language-Audio Pretraining
By: Paul Primus, Florian Schmid, Gerhard Widmer
Potential Business Impact:
Matches sounds to specific moments in audio.
Learning to associate audio with textual descriptions is valuable for a range of tasks, including pretraining, zero-shot classification, audio retrieval, audio captioning, and text-conditioned audio generation. Existing contrastive language-audio pretrained models are typically trained with global, clip-level descriptions, which provide only weak temporal supervision. We hypothesize that CLAP-like language-audio models, particularly those expected to produce frame-level embeddings, can benefit from stronger temporal supervision. To test this hypothesis, we curate a novel dataset of approximately 12,000 audio recordings from Freesound, each annotated with single-sentence free-text descriptions linked to specific temporal segments of the recording. We use large language models to clean these annotations by removing references to non-audible events, transcribed speech, typos, and annotator language bias. We further propose a frame-wise contrastive training strategy that learns to align text descriptions with temporal regions in an audio recording, and we demonstrate that, on the AudioSet Strong benchmark, our model achieves better temporal text-audio alignment than models trained only on global captions. The dataset and our source code are available on Zenodo and GitHub, respectively.
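The frame-wise contrastive objective described above can be pictured as a standard clip-level CLAP-style loss combined with a per-frame term that ties each caption to the frames inside its annotated segment. The sketch below is a minimal illustration of that idea under assumptions, not the paper's actual implementation; the tensor names (frame_emb, text_emb, segment_mask), the mean pooling over frames, and the equal loss weighting are choices made for the example only.

```python
import torch
import torch.nn.functional as F

def frame_wise_contrastive_loss(frame_emb, text_emb, segment_mask, temperature=0.07):
    """Hypothetical frame-wise contrastive loss sketch.

    frame_emb:    (B, T, D) frame-level audio embeddings
    text_emb:     (B, D)    caption embeddings
    segment_mask: (B, T)    1 where the caption's annotated segment is active, else 0
    """
    # Normalize embeddings so dot products are cosine similarities.
    frame_emb = F.normalize(frame_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Similarity of every caption to every frame of every clip: (B_text, B_audio, T).
    sim = torch.einsum("nd,btd->nbt", text_emb, frame_emb) / temperature

    # Pool frames into one clip-level score per (caption, clip) pair (mean pooling assumed).
    clip_sim = sim.mean(dim=-1)  # (B, B)

    # Symmetric clip-level contrastive loss, as in CLAP-style pretraining.
    targets = torch.arange(clip_sim.size(0), device=clip_sim.device)
    loss_t2a = F.cross_entropy(clip_sim, targets)
    loss_a2t = F.cross_entropy(clip_sim.t(), targets)

    # Frame-level term: for matched caption-clip pairs, push frames inside the
    # annotated segment toward the caption and frames outside it away.
    matched_sim = sim[targets, targets]  # (B, T)
    frame_loss = F.binary_cross_entropy_with_logits(matched_sim, segment_mask.float())

    return 0.5 * (loss_t2a + loss_a2t) + frame_loss
```

The per-frame term is what supplies the temporal signal that global captions lack: it tells the model not just which caption matches a clip, but where in the clip that caption applies.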
Similar Papers
Revisiting Audio-language Pretraining for Learning General-purpose Audio Representation
Audio and Speech Processing
Teaches computers to understand all sounds.
GLAP: General contrastive audio-text pretraining across domains and languages
Sound
Lets computers understand sounds in many languages.
Listening Between the Frames: Bridging Temporal Gaps in Large Audio-Language Models
Sound
Helps computers understand *when* things happen in audio.