Complementary Information Guided Occupancy Prediction via Multi-Level Representation Fusion
By: Rongtao Xu, Jinzhou Lin, Jialei Zhou, and more
Potential Business Impact:
Helps self-driving cars see the world better.
Camera-based occupancy prediction is a mainstream approach for 3D perception in autonomous driving, aiming to infer complete 3D scene geometry and semantics from 2D images. Almost all existing methods focus on improving performance through structural modifications, such as lightweight backbones and complex cascaded frameworks, achieving good yet limited performance. Few studies approach the problem from the perspective of representation fusion, leaving the rich diversity of features in 2D images underutilized. Motivated by this, we propose CIGOcc, a two-stage occupancy prediction framework based on multi-level representation fusion. CIGOcc extracts segmentation, graphics, and depth features from an input image and introduces a deformable multi-level fusion mechanism to fuse these three multi-level features. Additionally, CIGOcc incorporates knowledge distilled from SAM to further enhance prediction accuracy. Without increasing training costs, CIGOcc achieves state-of-the-art performance on the SemanticKITTI benchmark. The code is provided in the supplementary material and will be released at https://github.com/VitaLemonTea1/CIGOcc
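The abstract only names the deformable multi-level fusion step, so below is a minimal sketch of what such a fusion of segmentation, graphics, and depth feature maps could look like; it is not the authors' implementation. The class name DeformableFusion, the channel sizes, and the use of offset prediction plus grid_sample are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of fusing three image-level feature maps
# -- segmentation, graphics, and depth -- with a simplified deformable fusion step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableFusion(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Predict 2D sampling offsets for the two auxiliary feature maps from the
        # concatenated inputs, then resample and merge everything with a 1x1 conv.
        self.offset_pred = nn.Conv2d(3 * channels, 2 * 2, kernel_size=3, padding=1)
        self.merge = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def _sample(self, feat, offset):
        # Build a base sampling grid and shift it by the predicted offsets
        # (offsets are interpreted in normalized [-1, 1] coordinates).
        n, _, h, w = feat.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=feat.device),
            torch.linspace(-1, 1, w, device=feat.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        grid = grid + offset.permute(0, 2, 3, 1)
        return F.grid_sample(feat, grid, align_corners=True)

    def forward(self, seg_feat, gfx_feat, depth_feat):
        stacked = torch.cat([seg_feat, gfx_feat, depth_feat], dim=1)
        offsets = self.offset_pred(stacked)
        gfx_s = self._sample(gfx_feat, offsets[:, 0:2])
        depth_s = self._sample(depth_feat, offsets[:, 2:4])
        return self.merge(torch.cat([seg_feat, gfx_s, depth_s], dim=1))

# Example: fuse three 64-channel feature maps of size 48x160
fusion = DeformableFusion(channels=64)
seg, gfx, depth = (torch.randn(1, 64, 48, 160) for _ in range(3))
fused = fusion(seg, gfx, depth)
print(fused.shape)  # torch.Size([1, 64, 48, 160])
```

The idea illustrated here is that the segmentation branch stays fixed as a reference while the graphics and depth features are spatially re-aligned via learned offsets before merging, which is one plausible reading of "deformable multi-level fusion" in the abstract.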
Similar Papers
MCOP: Multi-UAV Collaborative Occupancy Prediction
CV and Pattern Recognition
Drones see better together, even hidden things.
Occupancy Learning with Spatiotemporal Memory
CV and Pattern Recognition
Helps self-driving cars see better over time.