Learning Visual Affordance from Audio
By: Lidong Lu, Guo Chen, Zhu Wei, and more
Potential Business Impact:
Lets robots understand objects by hearing them.
We introduce Audio-Visual Affordance Grounding (AV-AG), a new task that segments object interaction regions from action sounds. Unlike existing approaches that rely on textual instructions or demonstration videos, which are often limited by ambiguity or occlusion, audio provides real-time, semantically rich, and visually independent cues for affordance grounding, enabling a more intuitive understanding of interaction regions. To support this task, we construct the first AV-AG dataset, comprising a large collection of action sounds, object images, and pixel-level affordance annotations. The dataset also includes an unseen subset for evaluating zero-shot generalization. Furthermore, we propose AVAGFormer, a model equipped with a semantic-conditioned cross-modal mixer and a dual-head decoder that effectively fuses audio and visual signals for mask prediction. Experiments show that AVAGFormer achieves state-of-the-art performance on AV-AG, surpassing baselines from related tasks. Comprehensive analyses highlight the distinctions between AV-AG and audio-visual segmentation (AVS), the benefits of end-to-end modeling, and the contribution of each component. Code and dataset have been released at https://jscslld.github.io/AVAGFormer/.
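The abstract describes AVAGFormer in terms of two components: a semantic-conditioned cross-modal mixer that fuses audio and visual features, and a dual-head decoder that produces the affordance mask. The sketch below illustrates one plausible way such a pipeline could be wired together; the use of cross-attention for the mixer, the mask/classification head split, the stand-in encoders, and all module names and dimensions are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of an audio-conditioned affordance-grounding model.
# Assumptions: cross-attention mixer, mask + semantic-class dual heads,
# toy stand-in encoders. Not the published AVAGFormer architecture.
import torch
import torch.nn as nn


class SemanticConditionedMixer(nn.Module):
    """Injects audio semantics into visual tokens via cross-attention (assumed design)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens, audio_tokens):
        # visual_tokens: (B, HW, C) flattened image features
        # audio_tokens:  (B, T, C)  audio embedding sequence
        fused, _ = self.attn(query=visual_tokens, key=audio_tokens, value=audio_tokens)
        return self.norm(visual_tokens + fused)


class DualHeadDecoder(nn.Module):
    """Predicts an affordance mask plus an auxiliary semantic logit (assumed heads)."""
    def __init__(self, dim=256, num_classes=20):
        super().__init__()
        self.mask_head = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, 1, 1),
        )
        self.cls_head = nn.Linear(dim, num_classes)

    def forward(self, fused_tokens, h, w):
        B, HW, C = fused_tokens.shape
        fmap = fused_tokens.transpose(1, 2).reshape(B, C, h, w)
        mask_logits = self.mask_head(fmap)                # (B, 1, h, w)
        cls_logits = self.cls_head(fused_tokens.mean(1))  # (B, num_classes)
        return mask_logits, cls_logits


class AVAffordanceSketch(nn.Module):
    def __init__(self, dim=256, num_classes=20):
        super().__init__()
        # Stand-ins for real visual/audio backbones.
        self.visual_proj = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.audio_proj = nn.Linear(128, dim)
        self.mixer = SemanticConditionedMixer(dim)
        self.decoder = DualHeadDecoder(dim, num_classes)

    def forward(self, image, audio_feats):
        # image: (B, 3, H, W); audio_feats: (B, T, 128), e.g. log-mel frame embeddings
        v = self.visual_proj(image)              # (B, C, h, w)
        B, C, h, w = v.shape
        v_tokens = v.flatten(2).transpose(1, 2)  # (B, h*w, C)
        a_tokens = self.audio_proj(audio_feats)  # (B, T, C)
        fused = self.mixer(v_tokens, a_tokens)
        return self.decoder(fused, h, w)


if __name__ == "__main__":
    model = AVAffordanceSketch()
    mask, cls = model(torch.randn(2, 3, 224, 224), torch.randn(2, 10, 128))
    print(mask.shape, cls.shape)  # torch.Size([2, 1, 14, 14]) torch.Size([2, 20])
```

In this sketch the mask head would be supervised with the pixel-level affordance annotations, while the auxiliary classification head reflects the "semantic-conditioned" idea of tying the predicted region to the action category implied by the sound; how the actual model couples the two heads is described in the paper and code linked above.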
Similar Papers
Audio-Guided Visual Perception for Audio-Visual Navigation
Sound
Helps robots find sounds in new places.
Audio-Visual World Models: Towards Multisensory Imagination in Sight and Sound
Multimedia
Robots learn to see and hear to navigate better.
R-AVST: Empowering Video-LLMs with Fine-Grained Spatio-Temporal Reasoning in Complex Audio-Visual Scenarios
CV and Pattern Recognition
Helps computers understand videos with sound and movement.