SN-LiDAR: Semantic Neural Fields for Novel Space-time View LiDAR Synthesis
By: Yi Chen, Tianchen Deng, Wentao Zhao, and more
Potential Business Impact:
Lets self-driving cars generate realistic, semantically labeled LiDAR scans from viewpoints they never actually visited, cutting costly manual annotation.
Recent research has begun exploring novel view synthesis (NVS) for LiDAR point clouds, aiming to generate realistic LiDAR scans from unseen viewpoints. However, most existing approaches do not reconstruct semantic labels, which are crucial for many downstream applications such as autonomous driving and robotic perception. Unlike images, which benefit from powerful segmentation models, LiDAR point clouds lack such large-scale pre-trained models, making semantic annotation time-consuming and labor-intensive. To address this challenge, we propose SN-LiDAR, a method that jointly performs accurate semantic segmentation, high-quality geometric reconstruction, and realistic LiDAR synthesis. Specifically, we employ a coarse-to-fine planar-grid feature representation to extract global features from multi-frame point clouds, and leverage a CNN-based encoder to extract local semantic features from the current-frame point cloud. Extensive experiments on SemanticKITTI and KITTI-360 demonstrate the superiority of SN-LiDAR in both semantic and geometric reconstruction, effectively handling dynamic objects and large-scale scenes. Code will be available at https://github.com/dtc111111/SN-Lidar.
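The core design pairs a global planar-grid feature field with a per-frame CNN encoder. Below is a minimal PyTorch sketch of that idea, assuming a tri-plane-style grid queried at coarse and fine resolutions and a toy CNN over a range-image projection of the current frame; the class names, resolutions, and channel sizes here are illustrative placeholders, not the authors' released implementation.

```python
# Minimal sketch of the hybrid feature design the abstract describes.
# Assumptions (not from the paper): tri-plane interpretation of the
# planar grid, the specific resolutions/channel sizes, and a range-image
# input for the local encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PlanarGridField(nn.Module):
    """Coarse-to-fine planar-grid features: project a 3D point onto
    three axis-aligned feature planes at several resolutions, sample
    each plane bilinearly, and sum the results."""

    def __init__(self, resolutions=(64, 256), channels=32):
        super().__init__()
        # One learnable feature plane per axis pair (xy, xz, yz) per resolution.
        self.planes = nn.ParameterList(
            nn.Parameter(0.01 * torch.randn(3, channels, r, r))
            for r in resolutions
        )

    def forward(self, xyz):
        # xyz: (N, 3) points normalized to [-1, 1].
        coords = torch.stack(
            [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]], dim=0
        )  # (3, N, 2): each point's projection onto the three planes
        feats = 0.0
        for plane in self.planes:
            # grid_sample expects input (B, C, H, W) and grid (B, H_out, W_out, 2).
            sampled = F.grid_sample(
                plane, coords.unsqueeze(2), align_corners=True
            )  # (3, C, N, 1)
            feats = feats + sampled.squeeze(-1).sum(dim=0).t()  # (N, C)
        return feats


class LocalSemanticEncoder(nn.Module):
    """Toy CNN over a range-image projection of the current frame,
    standing in for the paper's local semantic feature extractor."""

    def __init__(self, in_ch=1, out_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )

    def forward(self, range_image):
        return self.net(range_image)  # (B, out_ch, H, W)


if __name__ == "__main__":
    pts = torch.rand(1024, 3) * 2 - 1          # query points in [-1, 1]^3
    global_feats = PlanarGridField()(pts)       # (1024, 32) global features
    rng = torch.rand(1, 1, 64, 1024)            # 64x1024 range image
    local_feats = LocalSemanticEncoder()(rng)   # (1, 32, 64, 1024)
    print(global_feats.shape, local_feats.shape)
```

In a full pipeline these global and local features would be fused and decoded into per-point semantics and LiDAR returns; this sketch covers only the two feature extractors the abstract names.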
Similar Papers
Real Time Semantic Segmentation of High Resolution Automotive LiDAR Scans
Robotics
Helps self-driving cars see better in real time.
Leveraging Semantic Graphs for Efficient and Robust LiDAR SLAM
Robotics
Helps robots understand where they are and what's around them.
Semantic Segmentation Algorithm Based on Light Field and LiDAR Fusion
CV and Pattern Recognition
Helps self-driving cars see through obstacles.