RayZer: A Self-supervised Large View Synthesis Model
By: Hanwen Jiang, Hao Tan, Peng Wang, and more
Potential Business Impact:
Lets computers understand 3D scenes from ordinary 2D photos, with no camera calibration data required.
We present RayZer, a self-supervised multi-view 3D vision model trained without any 3D supervision (i.e., no camera poses or scene geometry) that nonetheless exhibits emergent 3D awareness. Concretely, RayZer takes unposed and uncalibrated images as input, recovers camera parameters, reconstructs a scene representation, and synthesizes novel views. During training, RayZer relies solely on its self-predicted camera poses to render target views, eliminating the need for any ground-truth camera annotations and allowing RayZer to be trained with 2D image supervision alone. The emergent 3D awareness of RayZer is attributed to two key factors. First, we design a self-supervised framework that achieves 3D-aware auto-encoding of input images by disentangling camera and scene representations. Second, we design a transformer-based model in which the only 3D prior is the ray structure, connecting camera, pixel, and scene simultaneously. RayZer achieves novel view synthesis performance comparable to, or even better than, that of "oracle" methods that rely on pose annotations in both training and testing. Project: https://hwjiang1510.github.io/RayZer/
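The training loop the abstract describes can be sketched in a few lines: predict a camera embedding from the images themselves, encode a latent scene from the input views, render a held-out view from its self-predicted pose, and supervise with a plain 2D photometric loss. The PyTorch sketch below is a hypothetical, heavily simplified illustration of that loop, not the authors' architecture; all class names, shapes, and the stand-in cross-attention renderer are assumptions (the real model uses ray-structured transformer blocks and a proper camera parameterization).

```python
# Minimal sketch of a RayZer-style self-supervised training step.
# Everything here (module names, token shapes, the pooled "pose"
# embedding) is a hypothetical stand-in for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RayZerSketch(nn.Module):
    def __init__(self, dim=128, patch=8, img=64):
        super().__init__()
        self.patch, self.img = patch, img
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # Predicts a per-view camera embedding (stand-in for pose/rays).
        self.pose_head = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.scene_encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Ray-like pixel queries attend to the latent scene to decode RGB.
        self.render_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.to_rgb = nn.Linear(dim, 3)
        n = (img // patch) ** 2
        self.query_pos = nn.Parameter(torch.randn(1, n, dim))

    def embed(self, views):                  # views: (B, V, 3, H, W)
        B, V = views.shape[:2]
        tok = self.patchify(views.flatten(0, 1))       # (B*V, dim, h, w)
        tok = tok.flatten(2).transpose(1, 2)           # (B*V, N, dim)
        return tok.reshape(B, V, tok.shape[1], tok.shape[2])

    def forward(self, input_views, target_view):
        # 1) Tokenize input views and encode a latent scene from them.
        tok = self.embed(input_views)                  # (B, V, N, dim)
        B, V, N, D = tok.shape
        scene = self.scene_encoder(tok.flatten(1, 2))  # (B, V*N, dim)

        # 2) Self-predict the target camera embedding from the held-out
        #    view's pooled tokens -- no ground-truth pose is ever used.
        tgt_tok = self.embed(target_view.unsqueeze(1))[:, 0]          # (B, N, dim)
        tgt_pose = self.pose_head(tgt_tok.mean(dim=1, keepdim=True))  # (B, 1, dim)

        # 3) Pose-conditioned ray queries attend to the latent scene
        #    and decode a coarse RGB rendering of the target view.
        queries = self.query_pos + tgt_pose            # (B, N, dim)
        feat, _ = self.render_attn(queries, scene, scene)
        rgb = self.to_rgb(feat)                        # (B, N, 3)
        h = self.img // self.patch
        return rgb.transpose(1, 2).reshape(B, 3, h, h)

# One training step: purely 2D photometric supervision on a held-out view.
model = RayZerSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
inputs = torch.rand(2, 4, 3, 64, 64)   # 4 unposed, uncalibrated input views
target = torch.rand(2, 3, 64, 64)      # held-out view of the same scene
pred = model(inputs, target)
loss = F.mse_loss(pred, F.avg_pool2d(target, model.patch))  # coarse L2
opt.zero_grad(); loss.backward(); opt.step()
```

The detail this sketch tries to preserve is the disentanglement the abstract highlights: the held-out target image contributes only a pooled camera embedding, never scene content, so the model is forced to route appearance through the latent scene built from the other views.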
Similar Papers
E-RayZer: Self-supervised 3D Reconstruction as Spatial Visual Pre-training
CV and Pattern Recognition
Teaches computers to see in 3D from pictures.
RaySt3R: Predicting Novel Depth Maps for Zero-Shot Object Completion
CV and Pattern Recognition
Makes 3D shapes from one picture.
Rig3R: Rig-Aware Conditioning for Learned 3D Reconstruction
CV and Pattern Recognition
Helps robots understand 3D space better.