GaussianFormer3D: Multi-Modal Gaussian-based Semantic Occupancy Prediction with 3D Deformable Attention
By: Lingjun Zhao, Sizhe Wei, James Hays, and more
Potential Business Impact:
Helps self-driving cars see better in 3D.
3D semantic occupancy prediction is critical for achieving safe and reliable autonomous driving. Compared to camera-only perception systems, multi-modal pipelines, especially LiDAR-camera fusion methods, can produce more accurate and detailed predictions. Although most existing works use a dense grid-based representation, in which the entire 3D space is uniformly divided into discrete voxels, the emergence of 3D Gaussians provides a compact and continuous object-centric representation. In this work, we propose a multi-modal Gaussian-based semantic occupancy prediction framework using 3D deformable attention, named GaussianFormer3D. We introduce a voxel-to-Gaussian initialization strategy that provides 3D Gaussians with geometry priors from LiDAR data, and design a LiDAR-guided 3D deformable attention mechanism that refines the 3D Gaussians with LiDAR-camera fusion features in a lifted 3D space. We conducted extensive experiments on both on-road and off-road datasets, demonstrating that GaussianFormer3D achieves prediction accuracy comparable to state-of-the-art multi-modal fusion-based methods while reducing memory consumption and improving efficiency.
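The core idea of the deformable attention described above, where each Gaussian query samples a sparse set of offset locations in a 3D feature volume and aggregates them with learned weights, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the function name, the nearest-neighbor sampling (a real implementation would use trilinear interpolation), and all tensor layouts are hypothetical.

```python
import numpy as np

def deformable_attention_3d(feat_volume, query_pos, offsets, weights):
    """Sketch of 3D deformable attention (hypothetical layout, not the
    paper's code): each query samples the fused LiDAR-camera feature
    volume at (query_pos + offsets) and aggregates with softmax weights.

    feat_volume: (D, H, W, C) fused 3D feature volume
    query_pos:   (Q, 3) Gaussian centers in voxel coordinates
    offsets:     (Q, K, 3) learned sampling offsets per query
    weights:     (Q, K) unnormalized attention logits per sampling point
    """
    D, H, W, C = feat_volume.shape
    # Sampling locations, clipped to the volume; nearest-neighbor lookup
    # stands in for the trilinear sampling a real implementation would use.
    loc = query_pos[:, None, :] + offsets                          # (Q, K, 3)
    idx = np.clip(np.rint(loc).astype(int), 0, [D - 1, H - 1, W - 1])
    sampled = feat_volume[idx[..., 0], idx[..., 1], idx[..., 2]]   # (Q, K, C)
    # Softmax over the K sampling points, then a weighted sum per query.
    w = np.exp(weights - weights.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)
    return (w[..., None] * sampled).sum(axis=1)                    # (Q, C)
```

With zero offsets and uniform weights, each query simply reads the feature at its own center, which is a useful sanity check when experimenting with this kind of sampling.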
Similar Papers
TGP: Two-modal occupancy prediction with 3D Gaussian and sparse points for 3D Environment Awareness
CV and Pattern Recognition
Helps cars understand 3D spaces better.
Manboformer: Learning Gaussian Representations via Spatial-temporal Attention Mechanism
CV and Pattern Recognition
Helps self-driving cars see better in 3D.
ODG: Occupancy Prediction Using Dual Gaussians
CV and Pattern Recognition
Helps self-driving cars see the world better.