From Rays to Projections: Better Inputs for Feed-Forward View Synthesis
By: Zirui Wu, Zeren Jiang, Martin R. Oswald, and more
Potential Business Impact:
Makes new pictures from different viewpoints.
Feed-forward view synthesis models predict a novel view in a single pass with minimal 3D inductive bias. Existing works encode cameras as Plücker ray maps, which tie predictions to the arbitrary world coordinate gauge and make them sensitive to small camera transformations, thereby undermining geometric consistency. In this paper, we ask what inputs best condition a model for robust and consistent view synthesis. We propose projective conditioning, which replaces raw camera parameters with a target-view projective cue that provides a stable 2D input. This reframes the task from a brittle geometric regression problem in ray space to a well-conditioned target-view image-to-image translation problem. Additionally, we introduce a masked autoencoding pretraining strategy tailored to this cue, enabling pretraining on large-scale uncalibrated data. Our method shows improved fidelity and stronger cross-view consistency compared to ray-conditioned baselines on our view-consistency benchmark. It also achieves state-of-the-art quality on standard novel view synthesis benchmarks.
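To make the gauge-dependence concrete, the sketch below (not the authors' code; variable names and the NumPy-based layout are illustrative assumptions) shows how a Plücker ray map is typically built: every pixel is encoded as a 6-D ray (direction, moment) in world coordinates. Because both the camera center and the rotation enter the encoding, any change of the world frame or small camera perturbation shifts the entire input map, which is the sensitivity the abstract points to.

```python
# Minimal sketch of Plücker ray-map camera conditioning (illustrative, not the paper's code).
import numpy as np

def plucker_ray_map(K, R_c2w, t_c2w, height, width):
    """Return an (H, W, 6) map of [direction, moment] per pixel, in world coordinates."""
    # Pixel grid in homogeneous image coordinates (u, v, 1), sampled at pixel centers.
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)              # (H, W, 3)

    # Back-project pixels to camera-frame ray directions, then rotate into the world frame.
    dirs_cam = pix @ np.linalg.inv(K).T                           # (H, W, 3)
    dirs_world = dirs_cam @ R_c2w.T                               # (H, W, 3)
    dirs_world /= np.linalg.norm(dirs_world, axis=-1, keepdims=True)

    # Plücker moment: camera center (in world coordinates) crossed with the ray direction.
    origin = np.broadcast_to(t_c2w.reshape(1, 1, 3), dirs_world.shape)
    moment = np.cross(origin, dirs_world)

    # 6-channel conditioning image: [direction, moment]; depends on the world gauge via R, t.
    return np.concatenate([dirs_world, moment], axis=-1)          # (H, W, 6)
```

Projective conditioning, as described in the abstract, sidesteps this by handing the model a target-view 2D cue (e.g. source content reprojected into the target view) instead of these raw per-pixel camera quantities, so the conditioning signal lives in image space rather than in a world-frame-dependent ray space.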
Similar Papers
Pointmap-Conditioned Diffusion for Consistent Novel View Synthesis
CV and Pattern Recognition
Creates new views of driving scenes from few pictures.
MVInverse: Feed-forward Multi-view Inverse Rendering in Seconds
CV and Pattern Recognition
Makes 3D scenes look real from many pictures.
CloseUpShot: Close-up Novel View Synthesis from Sparse-views via Point-conditioned Diffusion Model
CV and Pattern Recognition
Creates detailed 3D views from few pictures.