LDRFusion: A LiDAR-Dominant multimodal refinement framework for 3D object detection
By: Jijun Wang, Yan Wu, Yujian Mo and more
Potential Business Impact:
Helps self-driving cars spot objects more reliably.
Existing LiDAR-Camera fusion methods have achieved strong results in 3D object detection. To address the sparsity of point clouds, previous approaches typically construct spatial pseudo point clouds via depth completion as an auxiliary input and adopt a proposal-refinement framework to generate detection results. However, introducing pseudo points inevitably brings noise, potentially resulting in inaccurate predictions. Considering the differing roles and reliability levels of each modality, we propose LDRFusion, a novel LiDAR-dominant two-stage refinement framework for multi-sensor fusion. The first stage relies solely on LiDAR to produce accurately localized proposals, followed by a second stage in which pseudo point clouds are incorporated to detect challenging instances. The instance-level results from both stages are subsequently merged. To further enhance the representation of local structures in pseudo point clouds, we present a hierarchical pseudo point residual encoding module, which encodes neighborhood sets using both feature and positional residuals. Experiments on the KITTI dataset demonstrate that our framework consistently achieves strong performance across multiple categories and difficulty levels.
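To make the residual-encoding idea concrete, below is a minimal PyTorch sketch of one level of a neighborhood encoder that combines positional residuals (neighbor coordinates minus center coordinates) with feature residuals (neighbor features minus center features), in the spirit of the module the abstract describes. The class name, layer sizes, KNN grouping, and max-pooling are assumptions for illustration, not the paper's actual implementation.

import torch
import torch.nn as nn

class PseudoPointResidualEncoder(nn.Module):
    """Illustrative sketch (not the paper's code): one level of a
    hierarchical encoder for pseudo points that encodes each local
    neighborhood with both positional and feature residuals."""

    def __init__(self, feat_dim: int, out_dim: int, k: int = 16):
        super().__init__()
        self.k = k  # neighborhood size (assumed; the paper does not state it here)
        # Per-neighbor input: 3 positional residuals + feat_dim feature
        # residuals + feat_dim center features.
        self.mlp = nn.Sequential(
            nn.Linear(3 + 2 * feat_dim, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) pseudo-point coordinates; feats: (N, C) point features.
        dists = torch.cdist(xyz, xyz)                          # (N, N) pairwise distances
        knn_idx = dists.topk(self.k, dim=-1, largest=False).indices  # (N, k) neighbors
        nbr_xyz = xyz[knn_idx]                                 # (N, k, 3)
        nbr_feat = feats[knn_idx]                              # (N, k, C)
        pos_res = nbr_xyz - xyz.unsqueeze(1)                   # positional residuals
        feat_res = nbr_feat - feats.unsqueeze(1)               # feature residuals
        center = feats.unsqueeze(1).expand(-1, self.k, -1)     # broadcast center feature
        grouped = torch.cat([pos_res, feat_res, center], dim=-1)
        # Pool over the neighborhood to get one encoded feature per point.
        return self.mlp(grouped).max(dim=1).values             # (N, out_dim)

# Usage sketch: stacking two levels approximates a hierarchy. A faithful
# implementation would likely subsample points between levels.
enc1 = PseudoPointResidualEncoder(feat_dim=4, out_dim=64)
enc2 = PseudoPointResidualEncoder(feat_dim=64, out_dim=128)
xyz, feats = torch.rand(1024, 3), torch.rand(1024, 4)
level2_features = enc2(xyz, enc1(xyz, feats))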
Similar Papers
LiteFusion: Taming 3D Object Detectors from Vision-Based to Multi-Modal with Minimal Adaptation
CV and Pattern Recognition
Helps self-driving cars see better, even without lasers.
A Multimodal Hybrid Late-Cascade Fusion Network for Enhanced 3D Object Detection
CV and Pattern Recognition
Helps cars see people and bikes better.
MLF-4DRCNet: Multi-Level Fusion with 4D Radar and Camera for 3D Object Detection in Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see better with radar.