OpenBox: Annotate Any Bounding Boxes in 3D
By: In-Jae Lee, Mungyeom Kim, Kwonyoung Ryu, and more
Potential Business Impact:
Teaches cars to see and understand objects.
Unsupervised and open-vocabulary 3D object detection has recently gained attention, particularly in autonomous driving, where reducing annotation costs and recognizing unseen objects are critical for both safety and scalability. However, most existing approaches uniformly annotate 3D bounding boxes, ignore objects' physical states, and require multiple self-training iterations for annotation refinement, resulting in suboptimal quality and substantial computational overhead. To address these challenges, we propose OpenBox, a two-stage automatic annotation pipeline that leverages a 2D vision foundation model. In the first stage, OpenBox associates instance-level cues from 2D images processed by a vision foundation model with the corresponding 3D point clouds via cross-modal instance alignment. In the second stage, it categorizes instances by rigidity and motion state, then generates adaptive bounding boxes with class-specific size statistics. As a result, OpenBox produces high-quality 3D bounding box annotations without requiring self-training. Experiments on the Waymo Open Dataset, the Lyft Level 5 Perception dataset, and the nuScenes dataset demonstrate improved accuracy and efficiency over baselines.
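The two-stage idea in the abstract can be illustrated with a minimal sketch: project LiDAR points into the image, keep the points that fall inside a 2D instance mask produced by a vision foundation model (stage one, cross-modal instance alignment), then fit a bounding box whose dimensions are blended with class-specific size statistics (stage two, adaptive box generation). Everything below is a hypothetical illustration, not the authors' implementation; the pinhole projection, the boolean-mask association, the axis-aligned box, and the `alpha` blending weight are all assumptions made for the sketch.

```python
import numpy as np

def project_points(points, K):
    """Project Nx3 camera-frame points to pixel coordinates (pinhole model).

    Assumes points are already in the camera frame; a real pipeline would
    first apply the LiDAR-to-camera extrinsics.
    """
    uv = (K @ points.T).T              # homogeneous image coordinates
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def assign_points_to_instance(points, K, mask):
    """Stage one (sketch): keep 3D points whose projections land inside
    a 2D instance mask from a vision foundation model."""
    uv = np.round(project_points(points, K)).astype(int)
    h, w = mask.shape
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < h)
    hit = np.zeros(len(points), dtype=bool)
    hit[in_img] = mask[uv[in_img, 1], uv[in_img, 0]]
    return points[hit]

def adaptive_box(instance_pts, class_size_prior=None, alpha=0.5):
    """Stage two (sketch): axis-aligned box from the instance's points,
    with dimensions optionally blended toward class-size statistics.
    `alpha` is a hypothetical blending weight, not a paper parameter."""
    lo, hi = instance_pts.min(axis=0), instance_pts.max(axis=0)
    center, size = (lo + hi) / 2.0, hi - lo
    if class_size_prior is not None:
        size = alpha * size + (1.0 - alpha) * np.asarray(class_size_prior)
    return center, size
```

A usage pass would loop over the masks of one frame, call `assign_points_to_instance` per instance, and then `adaptive_box` with the size statistics of the class predicted for that mask; the paper additionally branches on rigidity and motion state before choosing how the box is generated, which this sketch omits.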
Similar Papers
HQ-OV3D: A High Box Quality Open-World 3D Detection Framework based on Diffusion Model
CV and Pattern Recognition
Helps self-driving cars see and identify objects better.
BoxFusion: Reconstruction-Free Open-Vocabulary 3D Object Detection via Real-Time Multi-View Box Fusion
CV and Pattern Recognition
Helps cars see and understand objects in 3D.