EEPNet-V2: Patch-to-Pixel Solution for Efficient Cross-Modal Registration between LiDAR Point Cloud and Camera Image
By: Yuanchao Yue, Hui Yuan, Zhengxin Li, and more
Potential Business Impact:
Aligns car sensors faster for better driving.
The primary requirement for cross-modal data fusion is the precise alignment of data from different sensors. However, calibration between LiDAR point clouds and camera images is typically time-consuming and requires an external calibration board or specific environmental features. Cross-modal registration effectively solves this problem by aligning the data directly, without external calibration. However, due to the domain gap between the point cloud and the image, existing methods rarely achieve satisfactory registration accuracy while maintaining real-time performance. To address this issue, we propose a framework that projects point clouds into several 2D representations for matching with camera images, which not only leverages the geometric characteristics of LiDAR point clouds effectively but also bridges the domain gap between the point cloud and the image. Moreover, to tackle the challenges of cross-modal differences and the limited overlap between LiDAR point clouds and images in the image matching task, we introduce a multi-scale feature extraction network that effectively extracts features from both camera images and the projection maps of the LiDAR point cloud. Additionally, we propose a patch-to-pixel matching network to provide more effective supervision and achieve high accuracy. We validate the performance of our model through experiments on the KITTI and nuScenes datasets. Experimental results demonstrate that the proposed method achieves real-time performance and extremely high registration accuracy. Specifically, on the KITTI dataset, our model achieves a registration accuracy rate of over 99%. Our code is released at: https://github.com/ESRSchao/EEPNet-V2.
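The abstract does not specify which 2D representations are used; a common choice for KITTI-style 64-beam LiDAR is a spherical range image. The sketch below illustrates that general idea only, with assumed image size and vertical field-of-view parameters; the actual projection maps used by EEPNet-V2 may differ (see the linked repository for the authors' implementation).

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto a 2D range image.

    Parameters (h, w, fov_up, fov_down) are illustrative defaults for a
    64-beam sensor, not values taken from the paper.
    """
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = abs(fov_up_rad) + abs(fov_down_rad)

    depth = np.linalg.norm(points, axis=1)           # range of each point
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    yaw = -np.arctan2(y, x)                          # azimuth angle
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-8), -1.0, 1.0))

    # Normalize angles to [0, 1] image coordinates.
    u = 0.5 * (yaw / np.pi + 1.0)                    # column coordinate
    v = 1.0 - (pitch + abs(fov_down_rad)) / fov      # row coordinate

    cols = np.clip(np.floor(u * w), 0, w - 1).astype(np.int32)
    rows = np.clip(np.floor(v * h), 0, h - 1).astype(np.int32)

    # Keep the closest point per pixel: write far points first so near
    # points overwrite them.
    order = np.argsort(depth)[::-1]
    range_image = np.zeros((h, w), dtype=np.float32)
    range_image[rows[order], cols[order]] = depth[order]
    return range_image
```

Such a projection map can then be fed, alongside the camera image, to a multi-scale feature extractor and a patch-to-pixel matching stage as described in the abstract.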
Similar Papers
EdgeRegNet: Edge Feature-based Multimodal Registration Network between Images and LiDAR Point Clouds
CV and Pattern Recognition
Helps cars see better using cameras and lasers.
Self-Supervised Cross-Modal Learning for Image-to-Point Cloud Registration
CV and Pattern Recognition
Helps cars see the world in 3D.
Multimodal Point Cloud Semantic Segmentation With Virtual Point Enhancement
CV and Pattern Recognition
Makes self-driving cars see small things better.