CuSfM: CUDA-Accelerated Structure-from-Motion
By: Jingrui Yu, Jun Liu, Kefei Ren, and more
Potential Business Impact:
Helps robots and self-driving cars figure out where their cameras are and build accurate 3D maps.
Efficient and accurate camera pose estimation forms the foundational requirement for dense reconstruction in autonomous navigation, robotic perception, and virtual simulation systems. This paper addresses the challenge via cuSfM, a CUDA-accelerated offline Structure-from-Motion system that leverages GPU parallelization to efficiently employ computationally intensive yet highly accurate feature extractors, generating comprehensive and non-redundant data associations for precise camera pose estimation and globally consistent mapping. The system supports pose optimization, mapping, prior-map localization, and extrinsic refinement. It is designed for offline processing, where computational resources can be fully utilized to maximize accuracy. Experimental results demonstrate that cuSfM achieves significantly improved accuracy and processing speed compared to the widely used COLMAP method across various testing scenarios, while maintaining the high precision and global consistency essential for offline SfM applications. The system is released as an open-source Python wrapper implementation, PyCuSfM, available at https://github.com/nvidia-isaac/pyCuSFM, to facilitate research and applications in computer vision and robotics.
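To make the pipeline concrete, the sketch below walks through the core stages the abstract describes: feature extraction, data association, and relative camera pose estimation between two frames. This is not cuSfM's or PyCuSfM's actual API; it is a minimal CPU-only illustration using standard OpenCV calls, with placeholder image paths and assumed camera intrinsics. cuSfM itself runs heavier GPU-accelerated feature extractors and performs global optimization across all frames, which this two-view example does not attempt.

```python
import cv2
import numpy as np

# Two consecutive frames (grayscale); file names are placeholders.
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Camera intrinsics are assumed known from calibration (placeholder values).
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# 1. Feature extraction. cuSfM uses computationally intensive GPU extractors;
#    SIFT stands in here as a widely available CPU equivalent.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Data association: brute-force matching with Lowe's ratio test to keep
#    only distinctive, non-redundant correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Relative pose from the essential matrix, with RANSAC outlier rejection.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

print("Relative rotation:\n", R)
print("Relative translation (up to scale):\n", t.ravel())
```

In a full offline SfM system such as cuSfM, these two-view estimates feed into triangulation and global bundle adjustment over the whole image collection, which is where the accuracy and global consistency claimed in the abstract come from.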
Similar Papers
InstantSfM: Fully Sparse and Parallel Structure-from-Motion
CV and Pattern Recognition
Makes 3D maps from pictures much faster.
MRASfM: Multi-Camera Reconstruction and Aggregation through Structure-from-Motion in Driving Scenes
CV and Pattern Recognition
Makes self-driving cars see roads better.
CVD-SfM: A Cross-View Deep Front-end Structure-from-Motion System for Sparse Localization in Multi-Altitude Scenes
CV and Pattern Recognition
Helps robots find their way from the sky.