HiLoTs: High-Low Temporal Sensitive Representation Learning for Semi-Supervised LiDAR Segmentation in Autonomous Driving
By: R. D. Lin, Pengcheng Weng, Yinqiao Wang, and more
Potential Business Impact:
Helps self-driving cars see better using past data.
LiDAR point cloud semantic segmentation plays a crucial role in autonomous driving. In recent years, semi-supervised methods have gained popularity because they significantly reduce annotation labor and time costs. Current semi-supervised methods typically focus on the spatial distribution of point clouds or consider only short-term temporal representations, e.g., two adjacent frames, overlooking the rich long-term temporal properties inherent in autonomous driving scenarios. From driving experience, we observe that nearby objects, such as roads and vehicles, remain stable while driving, whereas distant objects exhibit greater variability in category and shape. This natural phenomenon is also captured by LiDAR, which reflects lower temporal sensitivity for nearby objects and higher sensitivity for distant ones. To leverage these characteristics, we propose HiLoTs, which learns high-temporal-sensitivity and low-temporal-sensitivity representations from continuous LiDAR frames. These representations are further enhanced and fused using a cross-attention mechanism. Additionally, we employ a teacher-student framework to align the representations learned by the labeled and unlabeled branches, effectively utilizing large amounts of unlabeled data. Experimental results on the SemanticKITTI and nuScenes datasets demonstrate that HiLoTs outperforms state-of-the-art semi-supervised methods and achieves performance close to that of LiDAR+Camera multimodal approaches. Code is available at https://github.com/rdlin118/HiLoTs
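The two core ideas in the abstract, fusing the two temporal-sensitivity representations with cross-attention and aligning branches with a teacher-student scheme, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the single-head attention layer, the random projection weights, and the `ema_update` helper are all assumptions made for the example; see the linked repository for the actual code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, d_model, seed=0):
    # Hypothetical single-head cross-attention: queries come from one
    # temporal-sensitivity branch, keys/values from the other, so each
    # branch's features are enhanced by attending to the other's.
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wk = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wv = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_model))  # (n_q, n_kv) weights
    return attn @ V                             # fused (n_q, d_model) features

def ema_update(teacher, student, momentum=0.99):
    # Teacher-student alignment as commonly done in semi-supervised
    # learning: teacher weights are an exponential moving average (EMA)
    # of the student's, providing stable targets for the unlabeled branch.
    return {name: momentum * teacher[name] + (1 - momentum) * student[name]
            for name in teacher}
```

For example, fusing 4 high-sensitivity query features with 6 low-sensitivity key/value features of dimension 16 yields a `(4, 16)` array, and each EMA step nudges the teacher slightly toward the student.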
Similar Papers
Real Time Semantic Segmentation of High Resolution Automotive LiDAR Scans
Robotics
Helps self-driving cars see better in real-time.
Guided Model-based LiDAR Super-Resolution for Resource-Efficient Automotive Scene Segmentation
CV and Pattern Recognition
Makes cheap car sensors see like expensive ones.
SN-LiDAR: Semantic Neural Fields for Novel Space-time View LiDAR Synthesis
CV and Pattern Recognition
Makes self-driving cars see around corners.