Enhancing LiDAR Point Features with Foundation Model Priors for 3D Object Detection
By: Yujian Mo, Yan Wu, Junqiao Zhao, and more
Potential Business Impact:
Helps self-driving cars see better by adding camera depth cues to LiDAR.
Recent advances in foundation models have opened up new possibilities for enhancing 3D perception. In particular, DepthAnything offers dense and reliable geometric priors from monocular RGB images, which can complement sparse LiDAR data in autonomous driving scenarios. However, such priors remain underutilized in LiDAR-based 3D object detection. In this paper, we address the limited expressiveness of raw LiDAR point features, especially the weak discriminative capability of the reflectance attribute, by introducing depth priors predicted by DepthAnything. These priors are fused with the original LiDAR attributes to enrich each point's representation. To leverage the enhanced point features, we propose a point-wise feature extraction module. Then, a Dual-Path RoI feature extraction framework is employed, comprising a voxel-based branch for global semantic context and a point-based branch for fine-grained structural details. To effectively integrate the complementary RoI features, we introduce a bidirectional gated RoI feature fusion module that balances global and local cues. Extensive experiments on the KITTI benchmark show that our method consistently improves detection accuracy, demonstrating the value of incorporating visual foundation model priors into LiDAR-based 3D object detection.
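The abstract names two concrete mechanisms: appending the DepthAnything depth prior to each LiDAR point's raw attributes, and a bidirectional gated fusion that balances voxel-branch (global) and point-branch (local) RoI features. The PyTorch sketch below illustrates one plausible reading of both; the names (enrich_points_with_depth_prior, BidirectionalGatedRoIFusion), the sigmoid-gate form, and all tensor shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


def enrich_points_with_depth_prior(points: torch.Tensor, depth_prior: torch.Tensor) -> torch.Tensor:
    """Append a per-point depth prior (e.g., sampled from a DepthAnything
    depth map via LiDAR-to-camera projection) to the raw LiDAR attributes.

    points:      (N, 4) tensor of [x, y, z, reflectance]
    depth_prior: (N, 1) tensor of monocular depth values per point
    returns:     (N, 5) enriched point features
    """
    return torch.cat([points, depth_prior], dim=-1)


class BidirectionalGatedRoIFusion(nn.Module):
    """Sketch of a bidirectional gated fusion between a voxel-branch RoI
    feature (global semantic context) and a point-branch RoI feature
    (fine-grained structure). Each branch produces a sigmoid gate that
    modulates the other branch before the gated features are combined.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.gate_from_voxel = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.gate_from_point = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.out = nn.Linear(channels, channels)

    def forward(self, roi_voxel: torch.Tensor, roi_point: torch.Tensor) -> torch.Tensor:
        # Each branch gates the other: global context filters local detail
        # and vice versa, balancing the two complementary cues.
        gated_point = self.gate_from_voxel(roi_voxel) * roi_point
        gated_voxel = self.gate_from_point(roi_point) * roi_voxel
        return self.out(gated_point + gated_voxel)


if __name__ == "__main__":
    # Toy shapes: 128 RoIs with 256-dim features from each branch.
    fusion = BidirectionalGatedRoIFusion(channels=256)
    roi_voxel = torch.randn(128, 256)
    roi_point = torch.randn(128, 256)
    fused = fusion(roi_voxel, roi_point)
    print(fused.shape)  # torch.Size([128, 256])
```

The gate-and-sum form is only one way to realize "bidirectional gated fusion"; the key idea from the abstract is that each RoI branch adaptively weights the other rather than being concatenated blindly.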
Similar Papers
Systematic Evaluation of Depth Backbones and Semantic Cues for Monocular Pseudo-LiDAR 3D Detection
CV and Pattern Recognition
Makes cameras see in 3D like eyes.
PF3Det: A Prompted Foundation Feature Assisted Visual LiDAR 3D Detector
CV and Pattern Recognition
Helps self-driving cars see better with less data.
Intrinsic-feature-guided 3D Object Detection
CV and Pattern Recognition
Helps self-driving cars see better in bad weather.