Retrieving Objects from 3D Scenes with Box-Guided Open-Vocabulary Instance Segmentation
By: Khanh Nguyen, Dasith de Silva Edirimuni, Ghulam Mubashar Hassan, et al.
Locating and retrieving objects from scene-level point clouds is a challenging problem with broad applications in robotics and augmented reality. This task is commonly formulated as open-vocabulary 3D instance segmentation. Although recent methods demonstrate strong performance, they depend heavily on SAM and CLIP to generate and classify 3D instance masks from images accompanying the point cloud, leading to substantial computational overhead and slow processing that limit their deployment in real-world settings. Open-YOLO 3D alleviates this issue by using a real-time 2D detector to classify class-agnostic masks produced directly from the point cloud by a pretrained 3D segmenter, eliminating the need for SAM and CLIP and significantly reducing inference time. However, Open-YOLO 3D often fails to generalize to object categories that appear infrequently in the 3D training data. In this paper, we propose a method that generates 3D instance masks for novel objects from RGB images guided by a 2D open-vocabulary detector. Our approach inherits the 2D detector's ability to recognize novel objects while maintaining efficient classification, enabling fast and accurate retrieval of rare instances from open-ended text queries. Our code will be made available at https://github.com/ndkhanh360/BoxOVIS.
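The abstract describes a box-guided pipeline in which 2D detections from an open-vocabulary detector are used to form 3D instance masks from the point cloud. The sketch below is a minimal, illustrative interpretation of that general idea, not the authors' implementation: it assumes a generic `detector` callable (e.g., any open-vocabulary 2D detector) returning boxes, labels, and scores, and per-frame camera intrinsics and poses; the merging and refinement of per-frame candidates into final 3D masks is only noted in a comment.

```python
# Hedged sketch: lifting 2D open-vocabulary detections to candidate 3D instance masks.
# The detector interface and frame dictionary keys are assumptions for illustration.
import numpy as np

def project_points(points, intrinsics, cam_to_world):
    """Project 3D world points into a camera image; return pixel coords and a visibility mask."""
    world_to_cam = np.linalg.inv(cam_to_world)
    pts_h = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
    pts_cam = (world_to_cam @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0  # keep only points in front of the camera
    uv = (intrinsics @ pts_cam.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
    return uv, in_front

def lift_boxes_to_masks(points, frames, detector, text_queries, score_thresh=0.3):
    """For each RGB frame, detect 2D boxes for the text queries and mark the
    scene points projecting inside each box as one candidate 3D instance."""
    candidates = []  # list of (boolean point mask, label, score)
    for frame in frames:  # each frame: dict with 'image', 'intrinsics', 'cam_to_world'
        uv, visible = project_points(points, frame["intrinsics"], frame["cam_to_world"])
        # `detector` is assumed to return (boxes [N,4] in xyxy, labels [N], scores [N])
        boxes, labels, scores = detector(frame["image"], text_queries)
        for box, label, score in zip(boxes, labels, scores):
            if score < score_thresh:
                continue
            x0, y0, x1, y1 = box
            inside = (
                visible
                & (uv[:, 0] >= x0) & (uv[:, 0] <= x1)
                & (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
            )
            if inside.any():
                candidates.append((inside, label, float(score)))
    # In practice, per-frame candidates would be merged across views and refined
    # (e.g., via depth or visibility checks) to produce the final 3D instance masks.
    return candidates
```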