L2COcc: Lightweight Camera-Centric Semantic Scene Completion via Distillation of LiDAR Model
By: Ruoyu Wang, Yukai Ma, Yi Yao, and more
Potential Business Impact:
Makes self-driving cars see 3D better, faster.
Semantic Scene Completion (SSC) constitutes a pivotal element in autonomous driving perception systems, tasked with inferring the 3D semantic occupancy of a scene from sensory data. To improve accuracy, prior research has employed various computationally demanding and memory-intensive 3D operations, imposing significant computational requirements on the platform during both training and testing. This paper proposes L2COcc, a lightweight camera-centric SSC framework that also accommodates LiDAR inputs. With our proposed efficient voxel transformer (EVT) and cross-modal knowledge distillation modules, including feature similarity distillation (FSD), TPV distillation (TPVD) and prediction alignment distillation (PAD), our method substantially reduces the computational burden while maintaining high accuracy. Experimental evaluations demonstrate that our proposed method surpasses current state-of-the-art vision-based SSC methods in accuracy on both the SemanticKITTI and SSCBench-KITTI-360 benchmarks. Additionally, our method is more lightweight, reducing both memory consumption and inference time by over 23% compared to the current state-of-the-art method. Code is available at our project page: https://studyingfufu.github.io/L2COcc/.
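To make the cross-modal distillation idea concrete, the sketch below illustrates one plausible form of feature similarity distillation: the camera (student) features are pushed toward the LiDAR teacher's features via a cosine-similarity loss. This is a hypothetical, simplified illustration in NumPy, not the paper's actual FSD implementation; the function name and the (N, C) feature layout are assumptions for the example.

```python
import numpy as np

def feature_similarity_distillation(student, teacher, eps=1e-8):
    """Hypothetical sketch of a feature similarity distillation (FSD) loss.

    student, teacher: (N, C) arrays of N feature vectors (e.g. one per
    voxel or TPV cell) from the camera student and the LiDAR teacher.
    Returns 1 minus the mean cosine similarity, so the loss is 0 when
    the student features align perfectly with the teacher's.
    """
    # L2-normalize each feature vector so only its direction matters.
    s = student / (np.linalg.norm(student, axis=1, keepdims=True) + eps)
    t = teacher / (np.linalg.norm(teacher, axis=1, keepdims=True) + eps)
    # Cosine similarity per vector, averaged over all N positions.
    return float(np.mean(1.0 - np.sum(s * t, axis=1)))
```

During training, such a term would be added to the segmentation loss, letting the lightweight camera branch inherit geometric cues from the stronger LiDAR model without needing LiDAR at inference time.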
Similar Papers
VLScene: Vision-Language Guidance Distillation for Camera-Based 3D Semantic Scene Completion
CV and Pattern Recognition
Helps self-driving cars see better in 3D.
Towards 3D Object-Centric Feature Learning for Semantic Scene Completion
CV and Pattern Recognition
Helps self-driving cars see objects better.
MS-Occ: Multi-Stage LiDAR-Camera Fusion for 3D Semantic Occupancy Prediction
CV and Pattern Recognition
Helps self-driving cars see and understand everything.