Advances in Feed-Forward 3D Reconstruction and View Synthesis: A Survey
By: Jiahui Zhang, Yuelei Li, Anpei Chen, and more
Potential Business Impact:
Makes 3D pictures and videos from photos.
3D reconstruction and view synthesis are foundational problems in computer vision, graphics, and immersive technologies such as augmented reality (AR), virtual reality (VR), and digital twins. Traditional methods rely on computationally intensive iterative optimization within a complex processing chain, which limits their applicability in real-world scenarios. Recent advances in feed-forward approaches, driven by deep learning, have revolutionized this field by enabling fast and generalizable 3D reconstruction and view synthesis. This survey offers a comprehensive review of feed-forward techniques for 3D reconstruction and view synthesis, organized as a taxonomy according to the underlying representation architecture, including point clouds, 3D Gaussian Splatting (3DGS), and Neural Radiance Fields (NeRF). We examine key tasks such as pose-free reconstruction, dynamic 3D reconstruction, and 3D-aware image and video synthesis, highlighting their applications in digital humans, SLAM, robotics, and beyond. In addition, we review commonly used datasets with detailed statistics, along with evaluation protocols for various downstream tasks. We conclude by discussing open research challenges and promising directions for future work, emphasizing the potential of feed-forward approaches to advance the state of the art in 3D vision.
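The abstract contrasts per-scene iterative optimization with feed-forward prediction. The minimal PyTorch sketch below is an illustration only, not the survey's method: the toy renderer, network sizes, and point-cloud output shape are all hypothetical assumptions made for the example.

```python
# Illustrative sketch (not from the survey): per-scene optimization vs.
# feed-forward reconstruction. Shapes and modules are toy assumptions.
import torch
import torch.nn as nn

# --- Per-scene iterative optimization (the "traditional" route) ----------
# A scene-specific parameter set (e.g. point positions/colors) is fitted by
# gradient descent against the input views of that one scene.
def optimize_per_scene(render, views, n_params=1000, steps=200):
    params = nn.Parameter(torch.randn(n_params, 3))   # toy 3D representation
    opt = torch.optim.Adam([params], lr=1e-2)
    for _ in range(steps):                            # slow loop, run per scene
        loss = ((render(params) - views) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return params

# --- Feed-forward reconstruction (the surveyed paradigm) -----------------
# A network trained across many scenes maps input images directly to a 3D
# representation in a single forward pass; no test-time optimization.
class FeedForwardReconstructor(nn.Module):
    def __init__(self, n_points=1000):
        super().__init__()
        self.n_points = n_points
        self.backbone = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
            nn.Linear(256, n_points * 3),
        )

    def forward(self, images):                        # images: (B, 3, 32, 32)
        return self.backbone(images).view(-1, self.n_points, 3)

if __name__ == "__main__":
    imgs = torch.rand(2, 3, 32, 32)                   # toy input views
    points = FeedForwardReconstructor()(imgs)         # single forward pass
    print(points.shape)                               # torch.Size([2, 1000, 3])
```

The feed-forward network amortizes reconstruction over a training set of scenes, so inference is a single forward pass rather than hundreds of optimization steps per scene, which is the speed and generalization advantage the abstract refers to.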
Similar Papers
Review of Feed-forward 3D Reconstruction: From DUSt3R to VGGT
CV and Pattern Recognition
Builds 3D worlds from just pictures.
Sparse-View 3D Reconstruction: Recent Advances and Open Challenges
CV and Pattern Recognition
Makes 3D pictures from few photos.
One-Shot Refiner: Boosting Feed-forward Novel View Synthesis via One-Step Diffusion
CV and Pattern Recognition
Makes blurry pictures sharp and clear.