ST-GS: Vision-Based 3D Semantic Occupancy Prediction with Spatial-Temporal Gaussian Splatting
By: Xiaoyang Yan, Muleilan Pei, Shaojie Shen
Potential Business Impact:
Helps self-driving cars build a more consistent 3D picture of their surroundings over time.
3D occupancy prediction is critical for comprehensive scene understanding in vision-centric autonomous driving. Recent advances have explored utilizing 3D semantic Gaussians to model occupancy while reducing computational overhead, but they remain constrained by insufficient multi-view spatial interaction and limited multi-frame temporal consistency. To overcome these limitations, we propose a novel Spatial-Temporal Gaussian Splatting (ST-GS) framework that enhances both spatial and temporal modeling in existing Gaussian-based pipelines. Specifically, we develop a guidance-informed spatial aggregation strategy within a dual-mode attention mechanism to strengthen spatial interaction in Gaussian representations. Furthermore, we introduce a geometry-aware temporal fusion scheme that effectively leverages historical context to improve temporal continuity in scene completion. Extensive experiments on the large-scale nuScenes occupancy prediction benchmark demonstrate that our approach not only achieves state-of-the-art performance but also delivers markedly better temporal consistency than existing Gaussian-based methods.
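The abstract names two mechanisms without giving details: a dual-mode attention for multi-view spatial aggregation and a geometry-aware temporal fusion of historical context. The PyTorch sketch below is a minimal illustration of what such components could look like; the module names, tensor shapes, the local/global attention split, and the exponential distance weighting are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of (1) dual-mode spatial aggregation, where 3D Gaussian
# queries attend to multi-view image features, and (2) geometry-aware temporal
# fusion, where current Gaussian features are blended with warped historical
# ones. Everything here is an illustrative assumption, not the ST-GS code.
import torch
import torch.nn as nn


class DualModeSpatialAggregation(nn.Module):
    """Gaussian queries attend to multi-view features in two modes."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # "Local" mode: cross-attention over all per-view image tokens
        # (a stand-in for the paper's guidance-informed aggregation).
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # "Global" mode: cross-attention over one pooled token per view.
        self.global_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, gaussians: torch.Tensor, views: torch.Tensor) -> torch.Tensor:
        # gaussians: (B, G, C) Gaussian query embeddings
        # views:     (B, V, N, C) image tokens from V camera views
        B, V, N, C = views.shape
        per_view = views.reshape(B, V * N, C)       # flatten views for local mode
        local, _ = self.local_attn(gaussians, per_view, per_view)
        pooled = views.mean(dim=2)                  # (B, V, C): one token per view
        global_, _ = self.global_attn(gaussians, pooled, pooled)
        return self.mix(torch.cat([local, global_], dim=-1))


def temporal_fusion(curr: torch.Tensor, hist: torch.Tensor,
                    dist: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Blend current and historical Gaussian features, trusting history
    only where the matched Gaussians geometrically agree."""
    # dist: (B, G) distance between each current Gaussian and its
    # ego-motion-compensated historical match; small distance -> high weight.
    w = torch.exp(-dist / sigma).unsqueeze(-1)      # (B, G, 1) geometry-aware weight
    return (1 - w) * curr + w * hist


if __name__ == "__main__":
    B, G, V, N, C = 2, 128, 6, 300, 64
    agg = DualModeSpatialAggregation(C)
    g = agg(torch.randn(B, G, C), torch.randn(B, V, N, C))  # spatial step
    g_prev = torch.randn(B, G, C)                  # warped historical features
    fused = temporal_fusion(g, g_prev, torch.rand(B, G))    # temporal step
    print(fused.shape)  # torch.Size([2, 128, 64])
```

In this reading, the exponential weight is one plausible interpretation of "geometry-aware": history contributes most where ego-motion-compensated Gaussians line up with current ones, which keeps stale content from leaking into regions the scene has changed.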
Similar Papers
GSsplat: Generalizable Semantic Gaussian Splatting for Novel-view Synthesis in 3D Scenes
Graphics
Makes 3D scenes understandable from many angles.
GS4: Generalizable Sparse Splatting Semantic SLAM
CV and Pattern Recognition
Builds detailed 3D maps from videos quickly.
Vision-Only Gaussian Splatting for Collaborative Semantic Occupancy Prediction
CV and Pattern Recognition
Cars share what they see to understand surroundings better.