Auto-Vocabulary 3D Object Detection
By: Haomeng Zhang, Kuan-Chuan Peng, Suhas Lohit, and more
Open-vocabulary 3D object detection methods can localize 3D boxes for classes unseen during training. Despite the name, existing methods rely on user-specified classes at both training and inference. We propose to study Auto-Vocabulary 3D Object Detection (AV3DOD), where class names are generated automatically for the detected objects without any user input. To this end, we introduce the Semantic Score (SS) to evaluate the quality of the generated class names. We then develop a novel framework, AV3DOD, which leverages 2D vision-language models (VLMs) to generate rich semantic candidates through image captioning, pseudo 3D box generation, and feature-space semantics expansion. AV3DOD achieves state-of-the-art (SOTA) performance in both localization (mAP) and semantic quality (SS) on the ScanNetV2 and SUN RGB-D datasets. Notably, it surpasses the SOTA method, CoDA, by 3.48 overall mAP and attains a 24.5% relative improvement in SS on ScanNetV2.
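The abstract does not spell out how the Semantic Score (SS) is computed. As a rough illustration only, the sketch below scores generated class names against reference names via cosine similarity of text embeddings; the embedding model (a sentence-transformers encoder) and the best-match averaging scheme are assumptions for illustration, not the paper's actual metric.

```python
# Hypothetical sketch of a semantic score between generated and reference
# class names, using cosine similarity of text embeddings.
# NOTE: the encoder choice and matching scheme are assumptions; the paper's
# actual Semantic Score (SS) definition may differ.
import numpy as np
from sentence_transformers import SentenceTransformer


def semantic_score(generated: list[str], reference: list[str]) -> float:
    """Average best-match cosine similarity from each reference class name
    to the closest generated name (illustrative only)."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in text encoder
    gen_emb = model.encode(generated, normalize_embeddings=True)
    ref_emb = model.encode(reference, normalize_embeddings=True)
    sim = ref_emb @ gen_emb.T          # reference x generated similarity matrix
    return float(sim.max(axis=1).mean())  # best match per reference, averaged


if __name__ == "__main__":
    # A generated name like "office chair" should score highly against "chair".
    print(semantic_score(["office chair", "coffee table"], ["chair", "table"]))
```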
Similar Papers
OpenM3D: Open Vocabulary Multi-view Indoor 3D Object Detection without Human Annotations
CV and Pattern Recognition
Finds objects in 3D rooms without human labels.
HQ-OV3D: A High Box Quality Open-World 3D Detection Framework based on Diffusion Model
CV and Pattern Recognition
Helps self-driving cars see and identify objects better.