Score: 1

Towards Open-Vocabulary Multimodal 3D Object Detection with Attributes

Published: August 22, 2025 | arXiv ID: 2508.16812v1

By: Xinhao Xiang, Kuan-Chuan Peng, Suhas Lohit, and more

Potential Business Impact:

Helps autonomous vehicles detect and describe new objects and their attributes.

Business Areas:
Image Recognition, Data and Analytics, Software

3D object detection plays a crucial role in autonomous systems, yet existing methods are limited by closed-set assumptions and struggle to recognize novel objects and their attributes in real-world scenarios. We propose OVODA, a novel framework enabling both open-vocabulary 3D object and attribute detection without requiring anchor sizes of novel classes. OVODA uses foundation models to bridge the semantic gap between 3D features and text while jointly detecting attributes, e.g., spatial relationships and motion states. To facilitate this research direction, we propose OVAD, a new dataset that supplements existing 3D object detection benchmarks with comprehensive attribute annotations. OVODA incorporates several key innovations, including foundation model feature concatenation, prompt tuning strategies, and specialized techniques for attribute detection, such as perspective-specified prompts and horizontal flip augmentation. Our results on the nuScenes and Argoverse 2 datasets show that, when no anchor sizes are given for novel classes, OVODA outperforms state-of-the-art methods in open-vocabulary 3D object detection while successfully recognizing object attributes. Our OVAD dataset is released at https://doi.org/10.5281/zenodo.16904069.
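To make the open-vocabulary matching idea concrete, here is a minimal sketch, not the authors' implementation: it assumes detected 3D box features have already been projected into the same embedding space as text prompts produced by a vision-language foundation model, and then assigns each box the class and attribute prompt with the highest cosine similarity. The function name `open_vocab_classify`, the random stand-in embeddings, and the example prompt strings are hypothetical illustrations only.

```python
# Hypothetical sketch (not the paper's code): open-vocabulary labeling of 3D boxes
# by cosine similarity between box features and text-prompt embeddings.
import numpy as np


def l2_normalize(x, axis=-1, eps=1e-8):
    # Normalize vectors so the dot product equals cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)


def open_vocab_classify(box_features, text_embeddings, labels):
    """Assign each detected box the label whose prompt embedding is most similar.

    box_features:    (N, D) 3D detection features projected into the text space.
    text_embeddings: (K, D) embeddings of class/attribute prompts from a
                     vision-language foundation model (assumed precomputed).
    labels:          list of K label strings corresponding to the prompts.
    """
    sims = l2_normalize(box_features) @ l2_normalize(text_embeddings).T  # (N, K)
    best = sims.argmax(axis=1)
    return [labels[i] for i in best], sims


# Toy usage with random stand-in features and prompt embeddings.
rng = np.random.default_rng(0)
box_feats = rng.normal(size=(4, 512))
class_prompts = ["car", "stroller", "construction vehicle"]        # incl. novel classes
attr_prompts = ["moving", "parked", "to the left of the ego car"]  # attribute prompts
class_emb = rng.normal(size=(len(class_prompts), 512))
attr_emb = rng.normal(size=(len(attr_prompts), 512))

pred_classes, _ = open_vocab_classify(box_feats, class_emb, class_prompts)
pred_attrs, _ = open_vocab_classify(box_feats, attr_emb, attr_prompts)
print(list(zip(pred_classes, pred_attrs)))
```

Because classes and attributes are both expressed as text prompts, the same matching step can score novel categories and attribute phrases without anchor sizes or retraining; the paper's additional techniques (feature concatenation, prompt tuning, perspective-specified prompts, flip augmentation) refine how those features and prompts are produced.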

Country of Origin
🇺🇸 United States

Page Count
20 pages

Category
Computer Science:
CV and Pattern Recognition