Spatial-CLAP: Learning Spatially-Aware Audio-Text Embeddings for Multi-Source Conditions
By: Kentaro Seki, Yuki Okamoto, Kouei Yamaoka, and more
Potential Business Impact:
Helps computers know where sounds come from.
Contrastive language-audio pretraining (CLAP) has achieved remarkable success as an audio-text embedding framework, but existing approaches are limited to monaural or single-source conditions and cannot fully capture spatial information. The central challenge in modeling spatial information lies in multi-source conditions, where the correct correspondence between each sound source and its location is required. To tackle this problem, we propose Spatial-CLAP, which introduces a content-aware spatial encoder that enables spatial representations coupled with audio content. We further propose spatial contrastive learning (SCL), a training strategy that explicitly enforces the learning of the correct correspondence and promotes more reliable embeddings under multi-source conditions. Experimental evaluations, including downstream tasks, demonstrate that Spatial-CLAP learns effective embeddings even under multi-source conditions, and confirm the effectiveness of SCL. Moreover, evaluation on unseen three-source mixtures highlights the fundamental distinction between conventional single-source training and our proposed multi-source training paradigm. These findings establish a new paradigm for spatially-aware audio-text embeddings.
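The abstract does not include implementation details, but for readers unfamiliar with CLAP-style training, the minimal sketch below shows the standard symmetric contrastive (InfoNCE) objective that frameworks of this kind build on. The function and variable names are illustrative assumptions, and the sketch does not reproduce Spatial-CLAP's content-aware spatial encoder or its SCL pairing scheme.

```python
# Minimal sketch (not the authors' code): a CLAP-style symmetric contrastive
# loss over a batch of audio and text embeddings. Matched audio-caption pairs
# sit on the diagonal of the similarity matrix.
import torch
import torch.nn.functional as F

def clap_contrastive_loss(audio_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over cosine similarities."""
    audio_emb = F.normalize(audio_emb, dim=-1)        # (B, D)
    text_emb = F.normalize(text_emb, dim=-1)          # (B, D)
    logits = audio_emb @ text_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)
    loss_a2t = F.cross_entropy(logits, targets)       # audio -> text retrieval
    loss_t2a = F.cross_entropy(logits.t(), targets)   # text -> audio retrieval
    return 0.5 * (loss_a2t + loss_t2a)

# Usage with random embeddings for a batch of 8 clips and their captions
# (e.g. "a dog barking on the left"); in Spatial-CLAP the audio branch would
# additionally encode where each source is located.
if __name__ == "__main__":
    a = torch.randn(8, 512)
    t = torch.randn(8, 512)
    print(clap_contrastive_loss(a, t).item())
```

In the paper's setting, SCL is described as extending this kind of objective so that multi-source mixtures are only treated as positives when each source is matched with its correct location, which the plain batch-diagonal pairing above does not enforce.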
Similar Papers
Hearing and Seeing Through CLIP: A Framework for Self-Supervised Sound Source Localization
CV and Pattern Recognition
Finds sounds in videos using AI.
GLAP: General contrastive audio-text pretraining across domains and languages
Sound
Lets computers understand sounds in many languages.
Refining CLIP's Spatial Awareness: A Visual-Centric Perspective
CV and Pattern Recognition
Helps computers understand pictures and where things are.