No Pose at All: Self-Supervised Pose-Free 3D Gaussian Splatting from Sparse Views
By: Ranran Huang, Krystian Mikolajczyk
Potential Business Impact:
Creates 3D scenes from just a few photos.
We introduce SPFSplat, an efficient framework for 3D Gaussian splatting from sparse multi-view images, requiring no ground-truth poses during training or inference. It employs a shared feature extraction backbone, enabling simultaneous prediction of 3D Gaussian primitives and camera poses in a canonical space from unposed inputs within a single feed-forward step. Alongside the rendering loss based on estimated novel-view poses, a reprojection loss is integrated to enforce the learning of pixel-aligned Gaussian primitives for enhanced geometric constraints. This pose-free training paradigm and efficient one-step feed-forward design make SPFSplat well-suited for practical applications. Remarkably, despite the absence of pose supervision, SPFSplat achieves state-of-the-art performance in novel view synthesis even under significant viewpoint changes and limited image overlap. It also surpasses recent methods trained with geometry priors in relative pose estimation. Code and trained models are available on our project page: https://ranrhuang.github.io/spfsplat/.
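The reprojection loss described above penalizes the distance between each predicted Gaussian center, projected through the estimated camera, and the pixel it should be aligned with. A minimal sketch of such a loss (the function name, argument layout, and use of NumPy are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def reprojection_loss(means3d, K, R, t, pixels):
    """Hypothetical sketch of a reprojection loss for pixel-aligned Gaussians.

    means3d: (N, 3) predicted Gaussian centers in the canonical space
    K:       (3, 3) camera intrinsics
    R, t:    estimated camera rotation (3, 3) and translation (3,)
    pixels:  (N, 2) pixel coordinates each Gaussian should project onto
    """
    cam = means3d @ R.T + t              # canonical/world -> camera coordinates
    proj = cam @ K.T                     # apply intrinsics
    uv = proj[:, :2] / proj[:, 2:3]      # perspective divide to pixel coords
    return np.mean(np.sum((uv - pixels) ** 2, axis=1))  # mean squared pixel error
```

If the Gaussians lie exactly on the rays through their assigned pixels, the loss is zero; gradients from this term push each primitive back onto its pixel ray, which is the geometric constraint the abstract refers to.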
Similar Papers
SPFSplatV2: Efficient Self-Supervised Pose-Free 3D Gaussian Splatting from Sparse Views
CV and Pattern Recognition
Creates 3D scenes from just a few pictures.
MuSASplat: Efficient Sparse-View 3D Gaussian Splats via Lightweight Multi-Scale Adaptation
CV and Pattern Recognition
Makes 3D pictures from few photos faster.
FSFSplatter: Build Surface and Novel Views with Sparse-Views within 2min
CV and Pattern Recognition
Creates 3D scenes from just a few pictures.