Audio-3DVG: Unified Audio–Point Cloud Fusion for 3D Visual Grounding
By: Duc Cao-Dinh, Khai Le-Duc, Anh Dao, and more
Potential Business Impact:
Finds objects in 3D using spoken words.
3D Visual Grounding (3DVG) involves localizing target objects in 3D point clouds based on natural language. While prior work has made strides using textual descriptions, leveraging spoken language, known as Audio-based 3D Visual Grounding, remains underexplored and challenging. Motivated by advances in automatic speech recognition (ASR) and speech representation learning, we propose Audio-3DVG, a simple yet effective framework that integrates audio and spatial information for enhanced grounding. Rather than treating speech as a monolithic input, we decompose the task into two complementary components. First, we introduce (i) Object Mention Detection, a multi-label classification task that explicitly identifies which objects are referred to in the audio, enabling more structured audio-scene reasoning. Second, we propose (ii) an Audio-Guided Attention module that models the interactions between target candidates and mentioned objects, enhancing discrimination in cluttered 3D environments. To support benchmarking, we (iii) synthesize audio descriptions for standard 3DVG datasets, including ScanRefer, Sr3D, and Nr3D. Experimental results demonstrate that Audio-3DVG not only achieves new state-of-the-art performance in audio-based grounding but also competes with text-based methods, highlighting the promise of integrating spoken language into 3D vision tasks.
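The abstract does not include code, but the two components it names map naturally onto standard building blocks. Below is a minimal PyTorch sketch of what they might look like: a multi-label mention head over a pooled speech embedding, and a cross-attention layer where candidate object features attend to the mentioned-object features. All class names, feature dimensions, and the exact wiring are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ObjectMentionDetector(nn.Module):
    """Hypothetical multi-label head: predicts which object classes
    the spoken description mentions (independent sigmoid per class)."""
    def __init__(self, audio_dim=768, num_classes=18):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(audio_dim, 256), nn.ReLU(), nn.Linear(256, num_classes)
        )

    def forward(self, audio_feat):
        # audio_feat: (B, audio_dim) pooled speech embedding (e.g., from an ASR encoder)
        return torch.sigmoid(self.head(audio_feat))  # (B, num_classes) mention probabilities

class AudioGuidedAttention(nn.Module):
    """Hypothetical audio-guided cross-attention: candidate object features,
    conditioned on the speech embedding, attend to mentioned-object features."""
    def __init__(self, obj_dim=256, audio_dim=768, heads=4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, obj_dim)
        self.attn = nn.MultiheadAttention(obj_dim, heads, batch_first=True)

    def forward(self, candidates, mentioned, audio_feat):
        # candidates: (B, Nc, obj_dim); mentioned: (B, Nm, obj_dim); audio_feat: (B, audio_dim)
        q = candidates + self.audio_proj(audio_feat).unsqueeze(1)  # inject speech cue into queries
        fused, _ = self.attn(q, mentioned, mentioned)              # (B, Nc, obj_dim)
        return fused

# Toy usage with random tensors (shapes only; real inputs would come from a
# speech encoder and a 3D object detector over the point cloud):
B, Nc, Nm = 2, 8, 3
audio = torch.randn(B, 768)
probs = ObjectMentionDetector()(audio)  # trained with binary cross-entropy per class
fused = AudioGuidedAttention()(torch.randn(B, Nc, 256), torch.randn(B, Nm, 256), audio)
```

In this reading, the mention head supplies which objects to attend over, and the attention layer uses them as keys/values to sharpen the score of each target candidate; the paper's actual modules may differ in detail.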
Similar Papers
Zero-Shot 3D Visual Grounding from Vision-Language Models
CV and Pattern Recognition
Finds objects in 3D using words, no special training.
Unified Representation Space for 3D Visual Grounding
CV and Pattern Recognition
Helps computers find objects in 3D using words.
I Speak and You Find: Robust 3D Visual Grounding with Noisy and Ambiguous Speech Inputs
CV and Pattern Recognition
Helps computers find things using messy speech.