VLScene: Vision-Language Guidance Distillation for Camera-Based 3D Semantic Scene Completion
By: Meng Wang, Huilong Pi, Ruihui Li, and more
Potential Business Impact:
Helps self-driving cars see better in 3D.
Camera-based 3D semantic scene completion (SSC) provides dense geometric and semantic perception for autonomous driving. However, images provide limited information, making the model susceptible to geometric ambiguity caused by occlusion and perspective distortion. Existing methods often lack explicit semantic modeling between objects, limiting their perception of 3D semantic context. To address these challenges, we propose a novel method, VLScene: Vision-Language Guidance Distillation for Camera-based 3D Semantic Scene Completion. The key insight is to use a vision-language model to introduce high-level semantic priors that provide the object spatial context required for 3D scene understanding. Specifically, we design a vision-language guidance distillation process to enhance image features, which can effectively capture semantic knowledge from the surrounding environment and improve spatial context reasoning. In addition, we introduce a geometric-semantic sparse awareness mechanism to propagate geometric structures in the neighborhood and enhance semantic information through contextual sparse interactions. Experimental results demonstrate that VLScene ranks first on the challenging SemanticKITTI and SSCBench-KITTI-360 benchmarks, yielding remarkable mIoU scores of 17.52 and 19.10, respectively.
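The abstract does not spell out the distillation objective, so the following is only a minimal sketch of how vision-language guidance distillation is commonly set up: student image features are projected into the embedding space of a frozen vision-language teacher and aligned with a cosine-similarity loss. All names here (VLGuidanceDistillation, proj, img_feats, vlm_feats) and the specific loss form are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VLGuidanceDistillation(nn.Module):
    """Hypothetical sketch: align camera-backbone (student) features with
    frozen vision-language (teacher) features via cosine distillation."""

    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        # 1x1 conv projects student features into the teacher's embedding space.
        self.proj = nn.Conv2d(student_dim, teacher_dim, kernel_size=1)

    def forward(self, img_feats: torch.Tensor, vlm_feats: torch.Tensor) -> torch.Tensor:
        # img_feats: (B, C_s, H, W) from the camera backbone (student).
        # vlm_feats: (B, C_t, H, W) from a frozen VLM encoder (teacher).
        student = self.proj(img_feats)
        # Distillation loss: 1 - cosine similarity per location, averaged;
        # the teacher is detached so gradients only update the student.
        cos = F.cosine_similarity(student, vlm_feats.detach(), dim=1)
        return (1.0 - cos).mean()

# Usage: add the distillation term to the task loss during training.
distill = VLGuidanceDistillation(student_dim=256, teacher_dim=512)
img_feats = torch.randn(2, 256, 24, 80)   # student backbone features
vlm_feats = torch.randn(2, 512, 24, 80)   # frozen teacher features
loss_distill = distill(img_feats, vlm_feats)
```

In such setups the distillation term is typically weighted and summed with the SSC task loss; the dimensions and weighting here are placeholders.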
Similar Papers
L2COcc: Lightweight Camera-Centric Semantic Scene Completion via Distillation of LiDAR Model
CV and Pattern Recognition
Makes self-driving cars see 3D better, faster.
Vision-based 3D Semantic Scene Completion via Capture Dynamic Representations
CV and Pattern Recognition
Helps self-driving cars see 3D worlds better.
Unleashing Semantic and Geometric Priors for 3D Scene Completion
CV and Pattern Recognition
Helps cars understand the 3D world for safer driving.