SemanticSplat: Feed-Forward 3D Scene Understanding with Language-Aware Gaussian Fields
By: Qijing Li, Jingxiang Sun, Liang An, and more
Potential Business Impact:
Builds 3D worlds from a few pictures.
Holistic 3D scene understanding, which jointly models geometry, appearance, and semantics, is crucial for applications like augmented reality and robotic interaction. Existing feed-forward 3D scene understanding methods (e.g., LSM) are limited to extracting language-based semantics from scenes and fail to achieve holistic scene comprehension. Additionally, they suffer from low-quality geometry reconstruction and noisy artifacts. In contrast, per-scene optimization methods rely on dense input views, which limits their practicality and complicates deployment. In this paper, we propose SemanticSplat, a feed-forward semantic-aware 3D reconstruction method that unifies 3D Gaussians with latent semantic attributes for joint geometry-appearance-semantics modeling. To predict the semantic anisotropic Gaussians, SemanticSplat fuses diverse feature fields (e.g., LSeg, SAM) with a cost volume representation that stores cross-view feature similarities, yielding more coherent and accurate scene comprehension. Leveraging a two-stage distillation framework, SemanticSplat reconstructs a holistic multi-modal semantic feature field from sparse-view images. Experiments demonstrate the effectiveness of our method for 3D scene understanding tasks like promptable and open-vocabulary segmentation. Video results are available at https://semanticsplat.github.io.
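To make the abstract's two core ideas concrete, here is a minimal PyTorch sketch: a container for "semantic anisotropic Gaussians" (standard 3D Gaussian Splatting parameters extended with a latent semantic vector) and a toy cross-view cost volume that stores per-pixel feature similarities. All names (SemanticGaussians, build_cost_volume), tensor shapes, and the disparity-shift warping are illustrative assumptions, not the authors' implementation; a real plane sweep would warp source features using the cameras' geometry.

```python
import torch
import torch.nn.functional as F
from dataclasses import dataclass

@dataclass
class SemanticGaussians:
    # Standard 3DGS attributes plus a latent semantic vector per Gaussian,
    # distilled from 2D feature fields such as LSeg and SAM (hypothetical
    # container; field names are assumptions for illustration).
    means: torch.Tensor       # (N, 3) Gaussian centers
    scales: torch.Tensor      # (N, 3) anisotropic extents
    rotations: torch.Tensor   # (N, 4) unit quaternions
    opacities: torch.Tensor   # (N, 1)
    colors: torch.Tensor      # (N, 3)
    semantics: torch.Tensor   # (N, D) latent semantic features

def build_cost_volume(feat_ref: torch.Tensor,
                      feat_src: torch.Tensor,
                      num_planes: int = 32,
                      max_disp: int = 16) -> torch.Tensor:
    """Toy cross-view cost volume: shift the source feature map along a set
    of candidate disparities and record per-pixel cosine similarity with the
    reference features. Real plane sweeps warp with full camera geometry."""
    ref = F.normalize(feat_ref, dim=1)                  # (B, C, H, W)
    B, _, H, W = feat_ref.shape
    volume = feat_ref.new_zeros(B, num_planes, H, W)
    for i, d in enumerate(torch.linspace(0, max_disp, num_planes)):
        shifted = torch.roll(feat_src, shifts=int(d), dims=-1)
        src = F.normalize(shifted, dim=1)
        volume[:, i] = (ref * src).sum(dim=1)           # cosine similarity
    return volume  # (B, num_planes, H, W): input to the Gaussian predictor

if __name__ == "__main__":
    f_ref = torch.randn(1, 64, 32, 32)  # e.g., LSeg features from view 1
    f_src = torch.randn(1, 64, 32, 32)  # matching features from view 2
    print(build_cost_volume(f_ref, f_src).shape)  # torch.Size([1, 32, 32, 32])
```

In this reading, the cost volume encodes how well candidate depths explain the cross-view feature correspondences, and the network consumes it to regress per-pixel Gaussian parameters, including the latent semantic channel that the two-stage distillation supervises against the 2D feature fields.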
Similar Papers
GSFF-SLAM: 3D Semantic Gaussian Splatting SLAM via Feature Field
Robotics
Helps robots understand and build 3D worlds better.
UniForward: Unified 3D Scene and Semantic Field Reconstruction via Feed-Forward Gaussian Splatting from Only Sparse-View Images
CV and Pattern Recognition
Builds 3D scenes that understand what's in them.
SceneSplat: Gaussian Splatting-based Scene Understanding with Vision-Language Pretraining
CV and Pattern Recognition
Teaches computers to understand 3D spaces from scans.