Intrinsic-feature-guided 3D Object Detection
By: Wanjing Zhang, Chenxing Wang
Potential Business Impact:
Helps self-driving cars detect objects more reliably from sparse, incomplete LiDAR data.
LiDAR-based 3D object detection is essential for autonomous driving systems. However, LiDAR point clouds often suffer from sparsity, uneven distribution, and incomplete structures, which significantly limit detection performance. In road driving environments, target objects such as vehicles, pedestrians, and cyclists have regular grid and topological structures, making them well suited to representation enhancement guided by complete templates. Therefore, this paper presents an intrinsic-feature-guided 3D object detection method built on a template-assisted feature enhancement module, which extracts intrinsic features from relatively generalized templates and provides rich structural information for foreground objects. Furthermore, a proposal-level contrastive learning mechanism is designed to enlarge the feature differences between foreground and background objects. The proposed modules act as plug-and-play components and improve the performance of multiple existing methods. Extensive experiments show that the proposed method achieves highly competitive detection results. Code will be available at https://github.com/zhangwanjingjj/IfgNet.git.
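To make the proposal-level contrastive learning idea concrete, below is a minimal, hypothetical PyTorch sketch of a supervised-contrastive-style loss over pooled proposal features, which pulls same-label (foreground or background) proposals together and pushes the two groups apart. The function name, tensor shapes, and temperature value are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a proposal-level contrastive
# loss that separates foreground proposal embeddings from background ones.
import torch
import torch.nn.functional as F


def proposal_contrastive_loss(proposal_feats, is_foreground, temperature=0.1):
    """proposal_feats: (N, C) pooled features of N proposals.
    is_foreground: (N,) boolean mask, True for foreground proposals.
    Returns a scalar supervised-contrastive-style loss."""
    z = F.normalize(proposal_feats, dim=1)             # unit-length embeddings
    sim = z @ z.t() / temperature                       # pairwise similarities
    # Exclude self-similarity on the diagonal.
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))
    # Positives: pairs sharing the same foreground/background label.
    same_label = is_foreground.unsqueeze(0) == is_foreground.unsqueeze(1)
    pos_mask = same_label & ~eye
    # Log-probability of each pair under a softmax over the row.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average the log-probability over each proposal's positives.
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0).sum(dim=1) / pos_count)
    return loss.mean()


if __name__ == "__main__":
    feats = torch.randn(8, 128)                         # 8 proposals, 128-dim features
    fg = torch.tensor([1, 1, 0, 0, 1, 0, 1, 0], dtype=torch.bool)
    print(proposal_contrastive_loss(feats, fg))
```

In this sketch the loss is computed per mini-batch of proposals; in practice it would be added as an auxiliary term alongside the detector's classification and regression losses.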
Similar Papers
Vision-based Lifting of 2D Object Detections for Automated Driving
CV and Pattern Recognition
Cars see in 3D using only cameras.
PF3Det: A Prompted Foundation Feature Assisted Visual LiDAR 3D Detector
CV and Pattern Recognition
Helps self-driving cars see better with less data.
InsFusion: Rethink Instance-level LiDAR-Camera Fusion for 3D Object Detection
CV and Pattern Recognition
Helps self-driving cars see better in 3D.