PF3Det: A Prompted Foundation Feature Assisted Visual LiDAR 3D Detector
By: Kaidong Li, Tianxiao Zhang, Kuan-Chuan Peng, and more
Potential Business Impact:
Helps self-driving cars see better with less data.
3D object detection is crucial for autonomous driving, leveraging LiDAR point clouds for precise depth information and camera images for rich semantic information. Multi-modal methods that combine both modalities therefore offer more robust detection results. However, efficiently fusing LiDAR points and images remains challenging due to the domain gap between the two modalities. In addition, the performance of many models is limited by the amount of high-quality labeled data, which is expensive to create. Recent advances in foundation models, which use large-scale pre-training across different modalities, enable better multi-modal fusion. Combining prompt engineering techniques for efficient training, we propose the Prompted Foundational 3D Detector (PF3Det), which integrates foundation model encoders and soft prompts to enhance LiDAR-camera feature fusion. PF3Det achieves state-of-the-art results under limited training data, improving NDS by 1.19% and mAP by 2.42% on the nuScenes dataset, demonstrating its efficiency in 3D detection.
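The core idea of soft-prompt-assisted fusion can be illustrated with a minimal sketch: a small set of learnable prompt vectors is concatenated with the frozen foundation-model features of both modalities before a joint mixing step, so only the prompts (and the fusion head) need training. This is a hypothetical illustration of the general technique, not PF3Det's actual architecture; all dimensions, names, and the single attention-style mixing step are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
D = 16        # shared feature dimension
N_IMG = 8     # camera feature tokens
N_PTS = 8     # LiDAR feature tokens
N_PROMPT = 4  # learnable soft prompt tokens

# Stand-ins for frozen foundation-model features of each modality
img_feats = rng.standard_normal((N_IMG, D))
lidar_feats = rng.standard_normal((N_PTS, D))

# Soft prompts: the small set of parameters that would be trained
# while the modality encoders stay frozen
soft_prompts = rng.standard_normal((N_PROMPT, D)) * 0.02

def fuse_with_prompts(img, lidar, prompts):
    """Concatenate soft prompts with both modalities' tokens and mix
    them with one attention-style weighting step (illustrative only)."""
    tokens = np.concatenate([prompts, img, lidar], axis=0)
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    # Row-wise softmax so each token attends over all tokens
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ tokens

fused = fuse_with_prompts(img_feats, lidar_feats, soft_prompts)
print(fused.shape)  # (N_PROMPT + N_IMG + N_PTS, D) = (20, 16)
```

Because gradients would flow only into `soft_prompts`, this style of tuning is far cheaper than fine-tuning the full encoders, which is why it pairs well with the limited-labeled-data setting the abstract describes.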
Similar Papers
Intrinsic-feature-guided 3D Object Detection
CV and Pattern Recognition
Helps self-driving cars see better in bad weather.
A Multimodal Hybrid Late-Cascade Fusion Network for Enhanced 3D Object Detection
CV and Pattern Recognition
Helps cars see people and bikes better.
Detect Anything 3D in the Wild
CV and Pattern Recognition
Finds new objects in 3D from one camera.