CRISTAL: Real-time Camera Registration in Static LiDAR Scans using Neural Rendering
By: Joni Vanherck, Steven Moonen, Brent Zoomers, and more
Potential Business Impact:
Lets robots know exactly where they are.
Accurate camera localization is crucial for robotics and Extended Reality (XR), enabling reliable navigation and alignment of virtual and real content. Existing visual methods often suffer from drift and scale ambiguity, or depend on fiducial markers or loop closure. This work introduces a real-time method for localizing a camera within a pre-captured, highly accurate colored LiDAR point cloud. By rendering synthetic views from this cloud, 2D-3D correspondences are established between live frames and the point cloud. A neural rendering technique narrows the domain gap between synthetic and real images, reducing occlusion and background artifacts to improve feature matching. The result is drift-free camera tracking at correct metric scale in the global LiDAR coordinate system. Two real-time variants are presented: Online Render and Match, and Prebuild and Localize. The method improves on prior results on the ScanNet++ dataset and outperforms existing SLAM pipelines.
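The core geometric step described above, recovering a camera pose from 2D-3D correspondences between a live frame and the LiDAR point cloud, can be sketched with the classic Direct Linear Transform (DLT). This is a minimal illustration, not the paper's implementation: the 3D points, intrinsics, and pose below are synthetic stand-ins for matched LiDAR points and a real camera.

```python
import numpy as np

# Hedged sketch: estimate a 3x4 camera projection matrix P from
# 2D-3D correspondences via the Direct Linear Transform (DLT).
# We simulate correspondences by projecting hypothetical "LiDAR"
# points through a known ground-truth camera, then recover P.

rng = np.random.default_rng(0)

# Hypothetical metric 3D points in the world (LiDAR) frame.
X = rng.uniform(-1, 1, size=(12, 3)) + np.array([0.0, 0.0, 5.0])
Xh = np.hstack([X, np.ones((len(X), 1))])  # homogeneous 3D points

# Assumed ground-truth camera: intrinsics K and pose [R | t].
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([[0.1], [0.0], [0.0]])
P_true = K @ np.hstack([R, t])

# Synthetic 2D observations (pixel coordinates).
x = (P_true @ Xh.T).T
x = x[:, :2] / x[:, 2:]

# DLT: two linear equations per correspondence, stacked as A p = 0,
# where p is the row-major flattening of P.
A = []
for Xw, (u, v) in zip(Xh, x):
    A.append(np.concatenate([np.zeros(4), -Xw, v * Xw]))
    A.append(np.concatenate([Xw, np.zeros(4), -u * Xw]))
A = np.array(A)

# The solution is the right singular vector of A with the
# smallest singular value.
P_est = np.linalg.svd(A)[2][-1].reshape(3, 4)

# Reprojection check: P_est should reproduce the observed pixels.
x_re = (P_est @ Xh.T).T
x_re = x_re[:, :2] / x_re[:, 2:]
err = np.abs(x_re - x).max()
print(f"max reprojection error: {err:.2e} px")
```

In a real pipeline the correspondences come from feature matches (here, between real frames and neural renders of the point cloud) and are contaminated by outliers, so a robust RANSAC-wrapped PnP solver would replace the plain least-squares DLT shown here.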
Similar Papers
SimULi: Real-Time LiDAR and Camera Simulation with Unscented Transforms
CV and Pattern Recognition
Makes self-driving cars see better with fake worlds.
A Low-Latency 3D Live Remote Visualization System for Tourist Sites Integrating Dynamic and Pre-captured Static Point Clouds
Multimedia
Makes outdoor places look real in 3D live.