From Rays to Projections: Better Inputs for Feed-Forward View Synthesis

Published: January 8, 2026 | arXiv ID: 2601.05116v1

By: Zirui Wu, Zeren Jiang, Martin R. Oswald, and more

Potential Business Impact:

Generates new images of a scene from viewpoints that were never photographed.

Business Areas:
Computer Vision Hardware, Software

Feed-forward view synthesis models predict a novel view in a single pass with minimal 3D inductive bias. Existing works encode cameras as Plücker ray maps, which tie predictions to the arbitrary world coordinate gauge and make them sensitive to small camera transformations, thereby undermining geometric consistency. In this paper, we ask what inputs best condition a model for robust and consistent view synthesis. We propose projective conditioning, which replaces raw camera parameters with a target-view projective cue that provides a stable 2D input. This reframes the task from a brittle geometric regression problem in ray space to a well-conditioned target-view image-to-image translation problem. Additionally, we introduce a masked autoencoding pretraining strategy tailored to this cue, enabling the use of large-scale uncalibrated data for pretraining. Our method shows improved fidelity and stronger cross-view consistency compared to ray-conditioned baselines on our view-consistency benchmark. It also achieves state-of-the-art quality on standard novel view synthesis benchmarks.
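To make the abstract's criticism concrete: the ray-map conditioning it compares against encodes each pixel's viewing ray in Plücker coordinates (direction plus moment), which depend on the chosen world frame. A minimal sketch of that baseline encoding, assuming a pinhole camera (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def plucker_ray_map(K, c2w, H, W):
    """Per-pixel Plücker ray coordinates (d, o x d) for a pinhole camera.

    K   : (3, 3) camera intrinsics
    c2w : (4, 4) camera-to-world pose
    H, W: image height and width
    Returns an (H, W, 6) array: unit ray direction + moment.
    """
    # Pixel grid sampled at pixel centers.
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)        # (H, W, 3)
    # Back-project pixels to camera-frame rays, rotate into world frame.
    dirs = pix @ np.linalg.inv(K).T @ c2w[:3, :3].T         # (H, W, 3)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    o = c2w[:3, 3]                                          # camera center in world frame
    moment = np.cross(np.broadcast_to(o, dirs.shape), dirs) # o x d
    return np.concatenate([dirs, moment], axis=-1)          # (H, W, 6)
```

Note that both the direction and the moment change under any rigid transform of the world frame, so the same physical camera yields different inputs in different gauges; this is the coordinate sensitivity the paper's projective conditioning is designed to avoid.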

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition