Selfi: Self-Improving Reconstruction Engine via 3D Geometric Feature Alignment
By: Youming Deng, Songyou Peng, Junyi Zhang, and more
Novel View Synthesis (NVS) has traditionally relied on models with explicit 3D inductive biases combined with camera parameters obtained beforehand via Structure-from-Motion (SfM). Recent vision foundation models like VGGT take an orthogonal approach: 3D knowledge is acquired implicitly through training data and loss objectives, enabling feed-forward prediction of both camera parameters and 3D representations directly from a set of uncalibrated images. While flexible, VGGT features lack explicit multi-view geometric consistency, and we find that improving this 3D feature consistency benefits both NVS and pose estimation. We introduce Selfi, a self-improving 3D reconstruction pipeline that uses feature alignment to turn a VGGT backbone into a high-fidelity 3D reconstruction engine by leveraging its own outputs as pseudo-ground-truth. Specifically, we train a lightweight feature adapter using a reprojection-based consistency loss, which distills VGGT outputs into a new geometrically aligned feature space that captures spatial proximity in 3D. This enables state-of-the-art performance in both NVS and camera pose estimation, demonstrating that feature alignment is a highly beneficial step for downstream 3D reasoning.
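To make the reprojection-based consistency loss concrete, here is a minimal PyTorch sketch of one plausible form: pixels from one view are back-projected with pseudo-ground-truth depth, transformed into a second view, and the adapter features at corresponding locations are pulled together. This is an illustration under stated assumptions, not the paper's actual code; the names (`FeatureAdapter`, `reprojection_consistency_loss`), the adapter architecture, and the exact loss form are hypothetical, and we assume VGGT supplies per-view depth, intrinsics, and relative pose.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAdapter(nn.Module):
    """Hypothetical lightweight adapter mapping frozen VGGT features to a
    geometrically aligned space (architecture here is illustrative)."""

    def __init__(self, in_dim: int = 1024, out_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_dim, out_dim, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(out_dim, out_dim, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> (B, out_dim, H, W), L2-normalized per pixel.
        return F.normalize(self.net(feats), dim=1)


def reprojection_consistency_loss(feat_a, feat_b, depth_a, K, T_ab):
    """Warp view-A pixels into view B using pseudo-GT depth and pose,
    then penalize feature mismatch at corresponding locations.

    feat_a, feat_b: (B, C, H, W) adapter features for the two views
    depth_a:        (B, 1, H, W) depth predicted for view A
    K:              (B, 3, 3)    camera intrinsics
    T_ab:           (B, 4, 4)    relative pose taking view-A camera
                                 coordinates into view B
    """
    B, C, H, W = feat_a.shape
    dev = feat_a.device

    # Homogeneous pixel grid of view A: (B, 3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, device=dev, dtype=torch.float32),
        torch.arange(W, device=dev, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).reshape(3, -1)
    pix = pix.unsqueeze(0).expand(B, -1, -1)

    # Back-project with depth, move to view B, re-project with K.
    cam = torch.linalg.inv(K) @ pix * depth_a.reshape(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=dev)], dim=1)
    pts_b = (T_ab @ cam_h)[:, :3]
    proj = K @ pts_b
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)

    # Sample view-B features at the reprojected coordinates.
    grid = torch.stack(
        [2.0 * uv[:, 0] / (W - 1) - 1.0, 2.0 * uv[:, 1] / (H - 1) - 1.0],
        dim=-1,
    ).reshape(B, H, W, 2)
    feat_b_warped = F.grid_sample(feat_b, grid, align_corners=True)

    # Mask out-of-frame correspondences and points behind the camera.
    valid = (grid.abs().amax(-1) <= 1.0) & (pts_b[:, 2].reshape(B, H, W) > 0)

    diff = (feat_a - feat_b_warped).pow(2).sum(dim=1)  # (B, H, W)
    return (diff * valid).sum() / valid.sum().clamp(min=1)
```

In this sketch only the adapter would be trained while the backbone stays frozen, so the loss distills the backbone's own depth and pose predictions into the adapter's feature space; correspondences that fall outside the second view or behind its camera are masked out of the average.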