Learning from Silence and Noise for Visual Sound Source Localization
By: Xavier Juanola, Giovana Morais, Magdalena Fuentes, and more
Potential Business Impact:
Helps computers locate where sounds come from in videos.
Visual sound source localization is a fundamental perception task that aims to detect the location of sounding sources in a video given its audio. Despite recent progress, we identify two shortcomings in current methods: 1) most approaches perform poorly in cases with low audio-visual semantic correspondence, such as silence, noise, and offscreen sounds, i.e., in the presence of negative audio; and 2) most prior evaluations are limited to positive cases, where both datasets and metrics assume scenarios with a single visible sound source in the scene. To address this, we introduce three key contributions. First, we propose a new training strategy that incorporates silence and noise, which improves performance on positive cases while making the model more robust to negative audio. The resulting self-supervised model, SSL-SaN, achieves state-of-the-art performance among self-supervised models in both sound localization and cross-modal retrieval. Second, we propose a new metric that quantifies the trade-off between the alignment and separability of auditory and visual features across positive and negative audio-visual pairs. Third, we present IS3+, an extended and improved version of the IS3 synthetic dataset with negative audio. Our data, metrics, and code are available at https://xavijuanola.github.io/SSL-SaN/.
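The abstract does not spell out the loss, but the idea of training with silence and noise as negative audio can be illustrated with a standard contrastive objective plus an extra term that pushes negative-audio embeddings away from all visual features. The following is a minimal PyTorch sketch under those assumptions; the function name and the hinge penalty are illustrative, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def av_loss_with_negative_audio(vis, aud_pos, aud_neg, tau=0.07):
    """Contrastive audio-visual loss with extra negative audio (illustrative).

    vis:     (B, D) visual embeddings
    aud_pos: (B, D) audio embeddings paired row-wise with `vis`
    aud_neg: (M, D) embeddings of silence / noise clips
    All embeddings are assumed L2-normalized.
    """
    # Standard InfoNCE over the batch: each image matches its own audio.
    logits = vis @ aud_pos.T / tau                      # (B, B) similarity matrix
    targets = torch.arange(vis.size(0), device=vis.device)
    nce = F.cross_entropy(logits, targets)

    # Push silence/noise audio away from every visual feature:
    # penalize any cosine similarity above zero (hinge penalty).
    neg_sim = vis @ aud_neg.T                           # (B, M)
    neg_penalty = F.relu(neg_sim).mean()

    return nce + neg_penalty

# Example with random, L2-normalized embeddings:
# vis     = F.normalize(torch.randn(8, 512), dim=-1)
# aud_pos = F.normalize(torch.randn(8, 512), dim=-1)
# aud_neg = F.normalize(torch.randn(4, 512), dim=-1)   # silence/noise clips
# loss = av_loss_with_negative_audio(vis, aud_pos, aud_neg)
```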
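Similarly, the alignment/separability trade-off metric is only named, not defined, in the abstract. A plausible proxy contrasts the mean similarity of matched audio-visual pairs (alignment) against the mean similarity of visual features to negative audio (separability); the sketch below is hypothetical and again assumes L2-normalized embeddings.

```python
import torch

def alignment_separability(vis, aud_pos, aud_neg):
    """Hypothetical trade-off score, not the paper's exact metric.

    alignment:    mean cosine similarity of matched audio-visual pairs
                  (higher is better).
    separability: mean cosine similarity of visual features to negative
                  (silence/noise/offscreen) audio (lower is better).
    """
    alignment = (vis * aud_pos).sum(dim=-1).mean()   # row-wise matched pairs
    separability = (vis @ aud_neg.T).mean()          # all visual-vs-negative pairs
    return (alignment - separability).item()
```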
Similar Papers
Improving Sound Source Localization with Joint Slot Attention on Image and Audio
CV and Pattern Recognition
Finds where sounds come from in pictures.
Latent Multi-view Learning for Robust Environmental Sound Representations
Sound
Helps computers understand sounds better by learning from noise.
Hearing and Seeing Through CLIP: A Framework for Self-Supervised Sound Source Localization
CV and Pattern Recognition
Finds sounds in videos using AI.