GLAP: General contrastive audio-text pretraining across domains and languages
By: Heinrich Dinkel, Zhiyong Yan, Tianzi Wang, and more
Potential Business Impact:
Lets computers link sounds, music, and speech to text in many languages.
Contrastive Language-Audio Pretraining (CLAP) is a widely used method for bridging the gap between the audio and text domains. Current CLAP methods enable sound and music retrieval in English but ignore multilingual spoken content. To address this, we introduce General Language Audio Pretraining (GLAP), which extends CLAP with multilingual and multi-domain abilities. GLAP demonstrates its versatility by achieving competitive performance on standard audio-text retrieval benchmarks such as Clotho and AudioCaps, while significantly surpassing existing methods in speech retrieval and classification tasks. GLAP also achieves strong results on widely used sound-event zero-shot benchmarks and outperforms previous methods on speech-content benchmarks. Keyword-spotting evaluations across 50 languages further underscore GLAP's multilingual capabilities, and multilingual sound and music understanding is evaluated across four languages. Checkpoints and source: https://github.com/xiaomi-research/dasheng-glap.
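To make the CLAP objective concrete, below is a minimal sketch of the symmetric contrastive (InfoNCE-style) loss that this family of methods, GLAP included, is built on: paired audio and text embeddings are pulled together while mismatched pairs in the batch are pushed apart. The encoder shapes, the temperature value, and the function name are illustrative assumptions, not GLAP's actual architecture or hyperparameters.

```python
# Minimal sketch of a CLAP-style symmetric contrastive loss.
# Shapes, names, and the temperature are illustrative assumptions,
# not the paper's actual models or settings.
import torch
import torch.nn.functional as F

def clap_contrastive_loss(audio_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """audio_emb, text_emb: (batch, dim) embeddings of paired audio/text."""
    # L2-normalize so dot products become cosine similarities.
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    # (batch, batch) similarity matrix; the diagonal holds the true pairs.
    logits = (a @ t.T) / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: audio-to-text and text-to-audio retrieval.
    loss_a2t = F.cross_entropy(logits, labels)
    loss_t2a = F.cross_entropy(logits.T, labels)
    return (loss_a2t + loss_t2a) / 2

# Toy usage with random tensors standing in for encoder outputs.
audio = torch.randn(8, 512)  # e.g. from an audio encoder
text = torch.randn(8, 512)   # e.g. from a multilingual text encoder
print(clap_contrastive_loss(audio, text))
```

Once such a shared embedding space is trained, the zero-shot and retrieval evaluations in the abstract reduce to the same primitive: embed candidate captions, keywords, or class-name prompts (in any supported language) as text and rank them by cosine similarity against a clip's audio embedding.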
Similar Papers
Spatial-CLAP: Learning Spatially-Aware Audio-Text Embeddings for Multi-Source Conditions
Sound
Helps computers know where sounds come from.
TACOS: Temporally-aligned Audio CaptiOnS for Language-Audio Pretraining
Audio and Speech Processing
Matches sounds to specific moments in audio.
Revisiting Audio-language Pretraining for Learning General-purpose Audio Representation
Audio and Speech Processing
Teaches computers to understand all sounds.