Learning What To Hear: Boosting Sound-Source Association For Robust Audiovisual Instance Segmentation
By: Jinbae Seo, Hyeongjun Kwon, Kwonyoung Kim, and more
Potential Business Impact:
Helps computers see and hear objects together.
Audiovisual instance segmentation (AVIS) requires accurately localizing and tracking sounding objects throughout video sequences. Existing methods suffer from visual bias stemming from two fundamental issues: uniform additive fusion prevents queries from specializing to different sound sources, and visual-only training objectives allow queries to converge to arbitrary salient objects. We propose Audio-Centric Query Generation using cross-attention, enabling each query to selectively attend to a distinct sound source and carry sound-specific priors into visual decoding. Additionally, we introduce a Sound-Aware Ordinal Counting (SAOC) loss that explicitly supervises the number of sounding objects through ordinal regression with monotonic consistency constraints, preventing visual-only convergence during training. Experiments on the AVISeg benchmark demonstrate consistent improvements: +1.64 mAP, +0.6 HOTA, and +2.06 FSLA, validating that query specialization and explicit counting supervision are crucial for accurate audiovisual instance segmentation.
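The abstract names two mechanisms: cross-attention that conditions object queries on audio before visual decoding, and an ordinal-regression counting loss with monotonic consistency. The following is a minimal PyTorch-style sketch of those ideas, assuming a DETR-like query decoder; all class names, tensor shapes, and the cumulative-link formulation of the counting loss are illustrative assumptions, not the paper's released implementation.

```python
# Illustrative sketch only; names, shapes, and loss details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioCentricQueryGenerator(nn.Module):
    """Cross-attends learnable queries over audio tokens so each query can
    specialize to a distinct sound source before visual decoding."""

    def __init__(self, num_queries: int = 100, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio_tokens: torch.Tensor) -> torch.Tensor:
        # audio_tokens: (batch, num_audio_tokens, dim)
        b = audio_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)        # (batch, Q, dim)
        attended, _ = self.cross_attn(q, audio_tokens, audio_tokens)
        # Audio-conditioned queries carry sound-specific priors into the visual decoder.
        return self.norm(q + attended)


def sound_aware_ordinal_counting_loss(logits: torch.Tensor,
                                      count: torch.Tensor,
                                      monotonic_weight: float = 0.1) -> torch.Tensor:
    """Ordinal counting loss: the k-th logit models P(count > k), so targets are
    cumulative (e.g. count=2 -> [1, 1, 0, ...]); a penalty discourages
    predictions whose cumulative probabilities increase with k."""
    # logits: (batch, max_count), count: (batch,) number of sounding objects
    thresholds = torch.arange(logits.size(1), device=logits.device)
    targets = (count.unsqueeze(1) > thresholds).float()
    bce = F.binary_cross_entropy_with_logits(logits, targets)

    probs = torch.sigmoid(logits)
    # Monotonic consistency: P(count > k+1) should not exceed P(count > k).
    monotonic_penalty = F.relu(probs[:, 1:] - probs[:, :-1]).mean()
    return bce + monotonic_weight * monotonic_penalty
```

At inference, under this cumulative formulation the predicted number of sounding objects would simply be the number of thresholds whose probability exceeds 0.5, i.e. `(torch.sigmoid(logits) > 0.5).sum(dim=1)`.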
Similar Papers
OpenAVS: Training-Free Open-Vocabulary Audio Visual Segmentation with Foundational Models
Machine Learning (CS)
Lets computers find sounds in videos.
From Waveforms to Pixels: A Survey on Audio-Visual Segmentation
CV and Pattern Recognition
Helps computers find sounds and objects in videos.
Sounding that Object: Interactive Object-Aware Image to Audio Generation
CV and Pattern Recognition
Makes sounds match chosen objects in pictures.