G3Splat: Geometrically Consistent Generalizable Gaussian Splatting
By: Mehdi Hosseinzadeh, Shin-Fang Chng, Yi Xu and more
3D Gaussians have recently emerged as an effective scene representation for real-time splatting and accurate novel-view synthesis, motivating several works to adapt multi-view structure prediction networks to regress per-pixel 3D Gaussians from images. However, most prior work extends these networks to predict additional Gaussian parameters -- orientation, scale, opacity, and appearance -- while relying almost exclusively on view-synthesis supervision. We show that a view-synthesis loss alone is insufficient to recover geometrically meaningful splats in this setting. We analyze and address the ambiguities of learning 3D Gaussian splats under self-supervision for pose-free generalizable splatting, and introduce G3Splat, which enforces geometric priors to obtain geometrically consistent 3D scene representations. Trained on RE10K, our approach achieves state-of-the-art performance in (i) geometrically consistent reconstruction, (ii) relative pose estimation, and (iii) novel-view synthesis. We further demonstrate strong zero-shot generalization on ScanNet, substantially outperforming prior work in both geometry recovery and relative pose estimation. Code and pretrained models are released on our project page (https://m80hz.github.io/g3splat/).
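The abstract describes a network that regresses per-pixel 3D Gaussian parameters (orientation, scale, opacity, appearance) from images and argues that a view-synthesis loss alone is ambiguous without additional geometric priors. The sketch below is a minimal, hypothetical illustration of that setup, not the authors' code: the module and loss names are placeholders, and the geometric prior is illustrated as agreement with an external depth estimate, which is only one possible instantiation.

```python
# Minimal sketch (assumptions, not the authors' implementation): a head that maps
# per-pixel features to 3D Gaussian parameters, and a loss combining view-synthesis
# supervision with a placeholder geometric prior.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerPixelGaussianHead(nn.Module):
    """Maps per-pixel features to 3D Gaussian parameters (one splat per pixel)."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # depth (1) + rotation quaternion (4) + log-scale (3) + opacity (1) + RGB (3)
        self.out = nn.Conv2d(feat_dim, 1 + 4 + 3 + 1 + 3, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> dict:
        x = self.out(feats)
        depth, quat, log_scale, opacity, rgb = torch.split(x, [1, 4, 3, 1, 3], dim=1)
        return {
            "depth": F.softplus(depth),            # positive per-pixel depth
            "rotation": F.normalize(quat, dim=1),  # unit quaternion
            "scale": log_scale.exp(),              # positive anisotropic scales
            "opacity": torch.sigmoid(opacity),     # opacity in (0, 1)
            "rgb": torch.sigmoid(rgb),             # simplified view-independent color
        }


def total_loss(rendered: torch.Tensor, target: torch.Tensor,
               pred_depth: torch.Tensor, prior_depth: torch.Tensor,
               geo_weight: float = 0.1) -> torch.Tensor:
    """View-synthesis (photometric) loss plus a hypothetical geometric prior term."""
    photometric = F.l1_loss(rendered, target)
    geometric = F.l1_loss(pred_depth, prior_depth)
    return photometric + geo_weight * geometric


if __name__ == "__main__":
    head = PerPixelGaussianHead(feat_dim=64)
    feats = torch.randn(2, 64, 32, 32)  # dummy per-pixel image features
    gaussians = head(feats)
    print({k: tuple(v.shape) for k, v in gaussians.items()})
```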
Similar Papers
G4Splat: Geometry-Guided Gaussian Splatting with Generative Prior
Computer Vision and Pattern Recognition
Reconstructs 3D scenes from few input photos using geometry-guided Gaussian splatting with a generative prior.
Geometry-Consistent 4D Gaussian Splatting for Sparse-Input Dynamic View Synthesis
Computer Vision and Pattern Recognition
Synthesizes realistic dynamic scenes from sparse input views with geometry-consistent 4D Gaussian splatting.
GeoSplat: A Deep Dive into Geometry-Constrained Gaussian Splatting
Computer Vision and Pattern Recognition
Examines geometry-constrained Gaussian splatting to produce more realistic and detailed reconstructions.