Score: 1

Learning What To Hear: Boosting Sound-Source Association For Robust Audiovisual Instance Segmentation

Published: September 26, 2025 | arXiv ID: 2509.22740v1

By: Jinbae Seo, Hyeongjun Kwon, Kwonyoung Kim, and more

Potential Business Impact:

Helps computers pick out and track the specific objects in a video that are making sound.

Business Areas:
Image Recognition, Data and Analytics, Software

Audiovisual instance segmentation (AVIS) requires accurately localizing and tracking sounding objects throughout video sequences. Existing methods suffer from visual bias stemming from two fundamental issues: uniform additive fusion prevents queries from specializing to different sound sources, and visual-only training objectives allow queries to converge to arbitrary salient objects. We propose Audio-Centric Query Generation using cross-attention, enabling each query to selectively attend to a distinct sound source and carry sound-specific priors into visual decoding. Additionally, we introduce a Sound-Aware Ordinal Counting (SAOC) loss that explicitly supervises the number of sounding objects through ordinal regression with monotonic consistency constraints, preventing visual-only convergence during training. Experiments on the AVISeg benchmark demonstrate consistent improvements: +1.64 mAP, +0.6 HOTA, and +2.06 FSLA, validating that query specialization and explicit counting supervision are crucial for accurate audiovisual instance segmentation.
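To make the first idea concrete, here is a minimal PyTorch sketch of audio-centric query generation via cross-attention. All names, dimensions, and the layer layout (AudioCentricQueryGenerator, dim=256, 8 heads) are assumptions for illustration; the digest does not specify the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AudioCentricQueryGenerator(nn.Module):
    """Sketch: let each object query cross-attend to audio features so it
    can specialize to a distinct sound source before visual decoding,
    instead of receiving one uniform additive audio signal."""

    def __init__(self, num_queries: int = 100, dim: int = 256, heads: int = 8):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)  # learnable base queries
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (B, T_audio, dim) -- per-frame audio embeddings
        B = audio_feats.size(0)
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)  # (B, N, dim)
        # Each query attends over the audio sequence with its own attention
        # weights, so different queries can latch onto different sources.
        attended, _ = self.cross_attn(query=q, key=audio_feats, value=audio_feats)
        # Audio-conditioned queries, ready to be fed into the visual decoder.
        return self.norm(q + attended)
```

The key contrast with uniform additive fusion is that the attention weights differ per query, so two queries fed the same audio can specialize to different sound sources.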
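For the second idea, below is a hedged sketch of a SAOC-style counting loss. The paper's exact formulation is not given in this digest; this follows the standard ordinal-regression recipe (K cumulative "count > k" binary targets) with a simple penalty standing in for the monotonic consistency constraint. The function name, K, and the penalty weight are hypothetical.

```python
import torch
import torch.nn.functional as F

def saoc_loss(count_logits: torch.Tensor, gt_count: torch.Tensor,
              mono_weight: float = 0.1) -> torch.Tensor:
    """count_logits: (B, K), where sigmoid(count_logits[:, k]) estimates
    P(number of sounding objects > k).
    gt_count: (B,) integer count of sounding objects, 0 <= count <= K."""
    B, K = count_logits.shape
    ks = torch.arange(K, device=count_logits.device)      # (K,)
    # Ordinal regression: cumulative 0/1 targets, e.g. count=2, K=4 -> [1,1,0,0]
    targets = (gt_count.unsqueeze(1) > ks).float()        # (B, K)
    bce = F.binary_cross_entropy_with_logits(count_logits, targets)
    # Monotonic consistency: P(count > k) should be non-increasing in k;
    # penalize any increase between adjacent thresholds.
    probs = torch.sigmoid(count_logits)
    mono = F.relu(probs[:, 1:] - probs[:, :-1]).mean()
    return bce + mono_weight * mono
```

At inference, the predicted count would be read off as `(torch.sigmoid(count_logits) > 0.5).sum(dim=1)`; supervising this count directly is what discourages queries from converging to arbitrary salient but silent objects.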

Repos / Data Links

Page Count
6 pages

Category
Electrical Engineering and Systems Science: Audio and Speech Processing