Unified Representation Space for 3D Visual Grounding
By: Yinuo Zheng, Lipeng Gu, Honghua Chen, and more
Potential Business Impact:
Helps computers find objects in 3D using words.
3D visual grounding (3DVG) is a critical scene-understanding task that aims to identify objects in 3D scenes from text descriptions. However, existing methods rely on separately pre-trained vision and text encoders, leaving a significant gap between the two modalities in spatial geometry and semantic categories; this discrepancy often causes errors in object positioning and classification. The paper proposes UniSpace-3D, which introduces a unified representation space for 3DVG that bridges the gap between visual and textual features. Specifically, UniSpace-3D incorporates three designs: i) a unified representation encoder that leverages the pre-trained CLIP model to map visual and textual features into a shared representation space; ii) a multi-modal contrastive learning module that further reduces the modality gap; and iii) a language-guided query selection module that uses positional and semantic information to identify object candidate points aligned with the textual description. Extensive experiments demonstrate that UniSpace-3D outperforms baseline models by at least 2.24% on the ScanRefer and Nr3D/Sr3D datasets. The code will be made available upon acceptance of the paper.
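The abstract names two components that are straightforward to picture in code: a contrastive objective that pulls matched visual and textual features together in the unified space, and a query selection step that scores candidate points against the text. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: PyTorch is assumed, and the function names, feature shapes, temperature, and top-k criterion (multimodal_contrastive_loss, language_guided_query_selection, k=256) are hypothetical.

```python
import torch
import torch.nn.functional as F


def multimodal_contrastive_loss(visual_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE-style loss between matched visual and textual features
    already projected into the shared representation space (hypothetical sketch)."""
    # Cosine similarity via L2-normalized features.
    v = F.normalize(visual_feats, dim=-1)          # (N, D)
    t = F.normalize(text_feats, dim=-1)            # (N, D)
    logits = v @ t.t() / temperature               # (N, N) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # Pull matched (diagonal) pairs together, push mismatched pairs apart,
    # in both the vision-to-text and text-to-vision directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def language_guided_query_selection(point_feats, sentence_feat, k=256):
    """Select the k candidate points whose features best match the pooled
    sentence feature (a simple cosine-similarity criterion, assumed here)."""
    scores = F.normalize(point_feats, dim=-1) @ F.normalize(sentence_feat, dim=0)
    return scores.topk(k).indices                  # indices of candidate points


# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    vis, txt = torch.randn(8, 512), torch.randn(8, 512)
    loss = multimodal_contrastive_loss(vis, txt)
    idx = language_guided_query_selection(torch.randn(1024, 512), torch.randn(512), k=16)
    print(loss.item(), idx.shape)
```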
Similar Papers
Zero-Shot 3D Visual Grounding from Vision-Language Models
CV and Pattern Recognition
Finds objects in 3D using words, no special training.
Audio-3DVG: Unified Audio -- Point Cloud Fusion for 3D Visual Grounding
Machine Learning (CS)
Finds objects in 3D using spoken words.
Unifying 2D and 3D Vision-Language Understanding
CV and Pattern Recognition
Lets computers understand 3D objects from pictures.