Score: 3

DT-NVS: Diffusion Transformers for Novel View Synthesis

Published: November 11, 2025 | arXiv ID: 2511.08823v1

By: Wonbong Jang, Jonathan Tremblay, Lourdes Agapito

BigTech Affiliations: NVIDIA

Potential Business Impact:

Generates new views of a scene from a single photo.

Business Areas:
Image Recognition, Data and Analytics, Software

Generating novel views of a natural scene, e.g., everyday scenes both indoors and outdoors, from a single view is an under-explored problem, even though it is a natural extension of object-centric novel view synthesis. Existing diffusion-based approaches focus instead on small camera movements in real scenes, or consider only unnatural object-centric scenes, limiting their potential applications in real-world settings. In this paper we move away from these constrained regimes and propose a 3D diffusion model trained with image-only losses on a large-scale dataset of real-world, multi-category, unaligned, and casually acquired videos of everyday scenes. We propose DT-NVS, a 3D-aware diffusion model for generalized novel view synthesis built on a transformer backbone. We make significant contributions to transformer and self-attention architectures for translating images into 3D representations, and introduce novel camera conditioning strategies that allow training on real-world unaligned datasets. In addition, we introduce a novel training paradigm that swaps the role of reference frame between the conditioning image and the sampled noisy input. We evaluate our approach on the task of generalized novel view synthesis from a single input image and show improvements over state-of-the-art 3D-aware diffusion models and deterministic approaches, while generating diverse outputs.
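The most concrete methodological detail in the abstract is the training paradigm that swaps the role of reference frame between the conditioning image and the sampled noisy input. Below is a minimal sketch of what such a swap could look like inside a diffusion training step; the paper's actual architecture, noise schedule, and camera parameterization are not given in the abstract, so `ToyDenoiser`, the fixed noise level `sigma`, the epsilon-prediction loss, and the pose handling are all illustrative assumptions, not DT-NVS itself.

```python
import torch

# Hypothetical sketch of the "reference-frame swapping" training step the
# abstract describes: with probability 0.5, the conditioning image and the
# noisy target exchange roles, so the relative camera is expressed with
# respect to either frame. All module and variable names are assumptions.

class ToyDenoiser(torch.nn.Module):
    """Stand-in for the 3D-aware diffusion transformer (architecture unknown)."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = torch.nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)

    def forward(self, noisy, cond, rel_pose):
        # A real model would inject rel_pose (e.g., via attention or
        # conditioning tokens); this toy version just concatenates the
        # two views along the channel dimension.
        return self.net(torch.cat([noisy, cond], dim=1))

def training_step(model, view_a, view_b, pose_a_to_b, sigma=0.5):
    # Randomly swap which view acts as the reference (conditioning) frame.
    if torch.rand(()) < 0.5:
        cond, target = view_a, view_b
        rel_pose = pose_a_to_b
    else:
        cond, target = view_b, view_a
        rel_pose = torch.linalg.inv(pose_a_to_b)  # camera now relative to view_b

    noise = torch.randn_like(target)
    noisy = target + sigma * noise  # simplistic fixed-level corruption
    pred = model(noisy, cond, rel_pose)
    return torch.nn.functional.mse_loss(pred, noise)  # image-only loss

model = ToyDenoiser()
a, b = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
pose = torch.eye(4).expand(2, 4, 4)
loss = training_step(model, a, b, pose)
loss.backward()
print(loss.item())
```

Randomizing which view is the reference forces the model to treat the two frames symmetrically rather than learning a fixed "clean reference, noisy target" asymmetry, which is plausibly what helps training on unaligned, casually captured videos as the abstract claims.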

Country of Origin
🇺🇸 🇬🇧 United States, United Kingdom

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition