VOIC: Visible-Occluded Decoupling for Monocular 3D Semantic Scene Completion
By: Zaidao Han, Risa Higashita, Jiang Liu
Camera-based 3D Semantic Scene Completion (SSC) is a critical task for autonomous driving and robotic scene understanding. It aims to infer a complete 3D volumetric representation of both semantics and geometry from a single image. Existing methods typically focus on end-to-end 2D-to-3D feature lifting and voxel completion. However, they often overlook the interference, induced by single-image input, between high-confidence visible-region perception and low-confidence occluded-region reasoning, which can lead to feature dilution and error propagation. To address these challenges, we introduce an offline Visible Region Label Extraction (VRLE) strategy that explicitly separates and extracts voxel-level supervision for visible regions from dense 3D ground truth. This strategy purifies the supervisory space for two complementary sub-tasks: visible-region perception and occluded-region reasoning. Building on this idea, we propose the Visible-Occluded Interactive Completion Network (VOIC), a novel dual-decoder framework that explicitly decouples SSC into visible-region semantic perception and occluded-region scene completion. VOIC first constructs a base 3D voxel representation by fusing image features with depth-derived occupancy. The visible decoder focuses on generating high-fidelity geometric and semantic priors, while the occlusion decoder leverages these priors together with cross-modal interaction to perform coherent global scene reasoning. Extensive experiments on the SemanticKITTI and SSCBench-KITTI360 benchmarks demonstrate that VOIC outperforms existing monocular SSC methods in both geometric completion and semantic segmentation accuracy, achieving state-of-the-art performance.
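To make the label-decoupling idea concrete, the sketch below shows one plausible way to extract voxel-level supervision for camera-visible regions from dense 3D ground truth using a depth map, as VRLE is described in the abstract. It is a minimal illustration under stated assumptions, not the authors' implementation: the function name `extract_visible_labels`, the camera-frame voxel layout, the depth margin, and the ignore label are all hypothetical choices.

```python
# Illustrative sketch (not the paper's VRLE code): mark voxels lying in front
# of the observed depth surface as visible and keep ground-truth labels only
# for them, leaving occluded voxels to the occlusion decoder's supervision.
import numpy as np

def extract_visible_labels(gt_labels, depth_map, K, voxel_origin,
                           voxel_size, free_label=255, margin=0.2):
    """Keep ground-truth labels only for camera-visible voxels.

    gt_labels    : (X, Y, Z) dense semantic voxel labels, camera frame.
    depth_map    : (H, W) metric depth image.
    K            : (3, 3) camera intrinsics.
    voxel_origin : (3,) coordinate of the voxel-grid corner (camera frame).
    voxel_size   : voxel edge length in metres.
    """
    X, Y, Z = gt_labels.shape
    H, W = depth_map.shape

    # Voxel centre coordinates in the camera frame.
    idx = np.stack(np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                               indexing="ij"), axis=-1).reshape(-1, 3)
    centers = voxel_origin + (idx + 0.5) * voxel_size            # (N, 3)

    # Project voxel centres into the image plane.
    z = centers[:, 2]
    in_front = z > 1e-3
    uvw = (K @ centers.T).T                                      # (N, 3)
    u = np.round(uvw[:, 0] / np.clip(z, 1e-3, None)).astype(int)
    v = np.round(uvw[:, 1] / np.clip(z, 1e-3, None)).astype(int)
    in_image = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # A voxel counts as visible if it is not behind the observed depth
    # surface at its pixel (plus a small tolerance margin).
    visible = np.zeros(len(centers), dtype=bool)
    surf = depth_map[v[in_image], u[in_image]]
    visible[in_image] = z[in_image] <= surf + margin

    # Visible-region supervision: ground-truth labels for visible voxels,
    # ignore label everywhere else.
    visible_mask = visible.reshape(X, Y, Z)
    visible_labels = np.full_like(gt_labels, free_label)
    visible_labels[visible_mask] = gt_labels[visible_mask]
    return visible_labels, visible_mask
```

In this reading, the visible decoder would be trained against `visible_labels` while the occlusion decoder is supervised by the full dense ground truth (or its occluded complement); the exact visibility test and masking rules used by the paper's offline VRLE may differ.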