Hierarchical Cross-Modal Alignment for Open-Vocabulary 3D Object Detection
By: Youjun Zhao, Jiaying Lin, Rynson W. H. Lau
Potential Business Impact:
Lets computers find and name objects in 3D scenes, even kinds they were never trained on.
Open-vocabulary 3D object detection (OV-3DOD) aims to localize and classify novel objects beyond closed category sets. The recent success of vision-language models (VLMs) has demonstrated their remarkable ability to understand open vocabularies. Existing works that leverage VLMs for 3D object detection (3DOD) generally resort to representations that lose the rich scene context required for 3D perception. To address this problem, this paper proposes a hierarchical framework, named HCMA, to simultaneously learn local object and global scene information for OV-3DOD. Specifically, we first design a Hierarchical Data Integration (HDI) approach to obtain coarse-to-fine 3D-image-text data, which is fed into a VLM to extract object-centric knowledge. To facilitate the association of feature hierarchies, we then propose an Interactive Cross-Modal Alignment (ICMA) strategy to establish effective intra-level and inter-level feature connections. To better align features across different levels, we further propose an Object-Focusing Context Adjustment (OFCA) module that refines multi-level features by emphasizing object-related features. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods on existing OV-3DOD benchmarks. It also achieves promising OV-3DOD results even without any 3D annotations.
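The core idea of aligning coarse-to-fine 3D features with text embeddings can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimensions, the fixed level weights, and the `classify_open_vocab` helper are all illustrative assumptions; the actual HCMA framework learns these alignments via its ICMA and OFCA modules.

```python
# Hedged sketch: open-vocabulary classification by matching hierarchical
# (e.g., object-level and scene-level) 3D features against text embeddings.
# Dimensions, weights, and function names are illustrative assumptions.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify_open_vocab(level_feats, text_embeddings, level_weights):
    """Score each category name by a weighted sum of per-level
    feature/text similarities, then return the best-scoring name.
    Any category with a text embedding can be scored, so the label
    set is open rather than fixed at training time."""
    scores = {}
    for name, text_emb in text_embeddings.items():
        scores[name] = sum(
            w * cosine(feat, text_emb)
            for feat, w in zip(level_feats, level_weights)
        )
    return max(scores, key=scores.get)

# Toy usage: object-level and scene-level features for one detected box,
# matched against text embeddings for two hypothetical category names.
level_feats = [[1.0, 0.0], [0.8, 0.2]]          # coarse-to-fine features
text_embeddings = {"chair": [1.0, 0.0], "table": [0.0, 1.0]}
level_weights = [0.7, 0.3]                       # assumed fixed weights
print(classify_open_vocab(level_feats, text_embeddings, level_weights))
```

In the real framework, the per-level features come from the HDI-produced 3D-image-text hierarchy and the weighting is replaced by learned intra-level and inter-level connections rather than a fixed sum.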
Similar Papers
A Hierarchical Semantic Distillation Framework for Open-Vocabulary Object Detection
CV and Pattern Recognition
Teaches computers to find any object, even new ones.
HQ-OV3D: A High Box Quality Open-World 3D Detection Framework based on Diffusion Model
CV and Pattern Recognition
Helps self-driving cars see and identify objects better.