CLAIM: Camera-LiDAR Alignment with Intensity and Monodepth
By: Zhuo Zhang, Yonghui Liu, Meijie Zhang, and more
Potential Business Impact:
Automatically aligns a vehicle's cameras and LiDAR sensors.
In this paper, we unlock the potential of powerful monodepth models for camera-LiDAR calibration and propose CLAIM, a novel method for aligning data from a camera and a LiDAR. Given an initial guess and pairs of images and LiDAR point clouds, CLAIM uses a coarse-to-fine search to find the transformation that minimizes a patched Pearson-correlation-based structure loss and a mutual-information-based texture loss. These two losses serve as good metrics for camera-LiDAR alignment and, unlike most existing methods, require no complicated data processing, feature extraction, or feature matching, which keeps our method simple and adaptable to most scenes. We validate CLAIM on the public KITTI, Waymo, and MIAS-LCEC datasets, and the experimental results demonstrate superior performance compared with state-of-the-art methods. The code is available at https://github.com/Tompson11/claim.
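To make the structure loss concrete, below is a minimal sketch of a patched Pearson-correlation loss between a dense monodepth prediction and a sparse LiDAR depth image. It is not the authors' implementation: the function name, the patch size, and the convention that zero marks pixels without a LiDAR return are all assumptions for illustration.

```python
import numpy as np

def patched_pearson_loss(mono_depth, lidar_depth, patch=32, eps=1e-6):
    """Sketch of a patched Pearson-correlation structure loss.

    mono_depth  -- dense HxW depth map from a monodepth model
    lidar_depth -- sparse HxW depth image from projecting the LiDAR
                   point cloud with candidate extrinsics (0 = no return)
    Returns 1 - mean per-patch Pearson correlation over valid pixels,
    so better alignment gives a lower loss.
    """
    H, W = mono_depth.shape
    losses = []
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            m = mono_depth[y:y + patch, x:x + patch].ravel()
            l = lidar_depth[y:y + patch, x:x + patch].ravel()
            mask = l > 0  # keep only pixels with a LiDAR return
            if mask.sum() < 2:
                continue  # too few points to correlate
            m, l = m[mask], l[mask]
            mc, lc = m - m.mean(), l - l.mean()
            denom = np.sqrt((mc ** 2).sum() * (lc ** 2).sum()) + eps
            losses.append(1.0 - (mc * lc).sum() / denom)
    return float(np.mean(losses)) if losses else 0.0
```

In this sketch, lidar_depth would be recomputed by reprojecting the point cloud for each candidate transformation, and the coarse-to-fine search would evaluate the loss over a progressively narrowing grid of candidates around the initial guess.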
Similar Papers
3D Human Pose and Shape Estimation from LiDAR Point Clouds: A Review
CV and Pattern Recognition
Helps computers see people in 3D from laser scans.
Boosting LiDAR-Based Localization with Semantic Insight: Camera Projection versus Direct LiDAR Segmentation
Robotics
Helps self-driving cars see better with cameras and lasers.
A Novel Solution for Drone Photogrammetry with Low-overlap Aerial Images using Monocular Depth Estimation
CV and Pattern Recognition
Maps places better with fewer pictures.