MeshSplat: Generalizable Sparse-View Surface Reconstruction via Gaussian Splatting
By: Hanzhi Chang, Ruijie Zhu, Wenjie Chang, and more
Potential Business Impact:
Creates accurate 3D surfaces from just a few pictures.
Surface reconstruction has been widely studied in computer vision and graphics. However, existing surface reconstruction methods struggle to recover accurate scene geometry when the input views are extremely sparse. To address this issue, we propose MeshSplat, a generalizable sparse-view surface reconstruction framework based on Gaussian Splatting. Our key idea is to use 2DGS as a bridge that connects novel view synthesis to learned geometric priors and then transfers these priors to surface reconstruction. Specifically, we incorporate a feed-forward network to predict per-view, pixel-aligned 2DGS, which allows the network to be supervised through novel-view image synthesis and thus eliminates the need for direct 3D ground-truth supervision. To improve the accuracy of the predicted 2DGS positions and orientations, we propose a Weighted Chamfer Distance Loss that regularizes the depth maps, especially in the overlapping regions of the input views, as well as a normal prediction network that aligns the orientations of the 2DGS with normal vectors predicted by a monocular normal estimator. Extensive experiments validate the effectiveness of our proposed improvements, demonstrating that our method achieves state-of-the-art performance on generalizable sparse-view mesh reconstruction tasks. Project Page: https://hanzhichang.github.io/meshsplat_web
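The two geometric regularizers described in the abstract can be sketched as simple losses. The PyTorch sketch below is illustrative only: the function names, the per-point weights `w_a`/`w_b` (standing in for the emphasis on view-overlap regions), and the cosine form of the normal-alignment term are assumptions, since the abstract does not give the exact formulations used in MeshSplat.

```python
# Minimal sketch, assuming point clouds back-projected from the predicted depth
# maps of two input views and per-point weights that emphasize overlapping regions.
import torch
import torch.nn.functional as F

def weighted_chamfer_distance(pts_a, pts_b, w_a=None, w_b=None):
    """pts_a: (N, 3), pts_b: (M, 3); w_a: (N,), w_b: (M,) optional per-point weights."""
    # Pairwise squared distances between the two point sets.
    d = torch.cdist(pts_a, pts_b) ** 2          # (N, M)
    a_to_b = d.min(dim=1).values                # nearest-neighbour distance for each point in A
    b_to_a = d.min(dim=0).values                # nearest-neighbour distance for each point in B
    if w_a is None:
        w_a = torch.ones_like(a_to_b)
    if w_b is None:
        w_b = torch.ones_like(b_to_a)
    # Weighted means; larger weights pull the loss towards overlapping regions.
    loss_ab = (w_a * a_to_b).sum() / w_a.sum().clamp(min=1e-8)
    loss_ba = (w_b * b_to_a).sum() / w_b.sum().clamp(min=1e-8)
    return loss_ab + loss_ba

def normal_alignment_loss(pred_normals, mono_normals):
    """Align predicted 2DGS orientations with monocular normals via 1 - cosine similarity."""
    pred = F.normalize(pred_normals, dim=-1)
    mono = F.normalize(mono_normals, dim=-1)
    return (1.0 - (pred * mono).sum(dim=-1)).mean()

# Toy usage with random data (shapes only; not the paper's actual pipeline).
pts_view_a = torch.randn(1024, 3)
pts_view_b = torch.randn(1024, 3)
overlap_w = torch.rand(1024)  # hypothetical per-point overlap weights
loss_geo = weighted_chamfer_distance(pts_view_a, pts_view_b, overlap_w, overlap_w)
loss_nrm = normal_alignment_loss(torch.randn(1024, 3), torch.randn(1024, 3))
```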
Similar Papers
SparSplat: Fast Multi-View Reconstruction with Generalizable 2D Gaussian Splatting
CV and Pattern Recognition
Reconstructs 3D surfaces from just a few photos, very fast.
Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views
CV and Pattern Recognition
Reconstructs 3D surfaces from just a few photos.
SparseSurf: Sparse-View 3D Gaussian Splatting for Surface Reconstruction
CV and Pattern Recognition
Builds better 3D worlds from fewer pictures.