Score: 1

CA-W3D: Leveraging Context-Aware Knowledge for Weakly Supervised Monocular 3D Detection

Published: March 6, 2025 | arXiv ID: 2503.04154v2

By: Chupeng Liu, Runkai Zhao, Weidong Cai

Potential Business Impact:

Helps cars see in 3D with less annotated training data.

Business Areas:
Image Recognition Data and Analytics, Software

Weakly supervised monocular 3D detection, while less annotation-intensive, often struggles to capture the global context required for reliable 3D reasoning. Conventional label-efficient methods focus on object-centric features and neglect the contextual semantic relationships that are critical in complex scenes. In this work, we propose a Context-Aware Weak supervision framework for monocular 3D object detection, termed CA-W3D, to address this limitation with a two-stage training paradigm. Specifically, we first introduce a pre-training stage employing Region-wise Object Contrastive Matching (ROCM), which aligns regional object embeddings derived from a trainable monocular 3D encoder and a frozen open-vocabulary 2D visual grounding model. This alignment encourages the monocular encoder to discriminate scene-specific attributes and acquire richer contextual knowledge. In the second stage, we incorporate a pseudo-label training process with a Dual-to-One Distillation (D2OD) mechanism, which effectively transfers contextual priors into the monocular encoder while preserving spatial fidelity and maintaining computational efficiency during inference. Extensive experiments on the public KITTI benchmark demonstrate the effectiveness of our approach: it surpasses the SoTA method on all metrics, highlighting the importance of context-aware knowledge in weakly supervised monocular 3D detection.
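
To make the ROCM pre-training idea concrete, the sketch below shows one plausible way to align paired region embeddings from the trainable monocular encoder and the frozen 2D grounding model using a symmetric InfoNCE-style contrastive loss. The function name, tensor shapes, and temperature value are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def rocm_contrastive_loss(mono_region_feats, grounding_region_feats, temperature=0.07):
    """Symmetric contrastive matching over paired region embeddings (sketch).

    mono_region_feats:      (N, D) region embeddings from the trainable monocular 3D encoder
    grounding_region_feats: (N, D) region embeddings from the frozen open-vocabulary 2D model
    Row i of each tensor is assumed to describe the same image region (positive pair);
    every other row in the batch serves as a negative.
    """
    # L2-normalize so the dot product becomes a cosine similarity
    z_m = F.normalize(mono_region_feats, dim=-1)
    z_g = F.normalize(grounding_region_feats, dim=-1)

    logits = z_m @ z_g.t() / temperature                      # (N, N) similarity matrix
    targets = torch.arange(z_m.size(0), device=z_m.device)    # matched pairs lie on the diagonal

    # Pull matched regions together and push mismatched ones apart, in both directions
    loss_m2g = F.cross_entropy(logits, targets)
    loss_g2m = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_m2g + loss_g2m)
```

In this sketch, gradients flow only through the monocular branch (the grounding model is frozen), so minimizing the loss nudges the monocular encoder toward the contextual semantics captured by the 2D visual grounding model.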

Country of Origin
🇦🇺 Australia

Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition