FVGen: Accelerating Novel-View Synthesis with Adversarial Video Diffusion Distillation
By: Wenbin Teng, Gonglin Chen, Haiwei Chen, and more
Potential Business Impact:
Makes 3D pictures from few photos faster.
Recent progress in 3D reconstruction has enabled realistic 3D models from dense image captures, yet challenges persist with sparse views, often leading to artifacts in unseen areas. Recent works leverage Video Diffusion Models (VDMs) to generate dense observations, filling the gaps when only sparse views are available for 3D reconstruction tasks. A significant limitation of these methods is the slow sampling speed of VDMs. In this paper, we present FVGen, a novel framework that addresses this challenge by enabling fast novel view synthesis using VDMs in as few as four sampling steps. We propose a novel video diffusion model distillation method that distills a multi-step denoising teacher model into a few-step denoising student model using Generative Adversarial Networks (GANs) and softened reverse KL-divergence minimization. Extensive experiments on real-world datasets show that, compared to previous works, our framework generates the same number of novel views with similar (or even better) visual quality while reducing sampling time by more than 90%. FVGen significantly improves time efficiency for downstream reconstruction tasks, particularly when working with sparse input views (more than two), where pre-trained VDMs need to be run multiple times to achieve better spatial coverage.
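To make the distillation idea concrete, the sketch below shows one possible training step in which a few-step student denoiser is trained against a frozen multi-step teacher with an adversarial (GAN) loss plus a reverse-KL-style distribution-matching term. This is a minimal illustration under stated assumptions, not the paper's implementation: the module names (TinyDenoiser, TinyDiscriminator), the four-step sampler, the hinge GAN loss, and the softening weight `tau` are all placeholders, and the exact form of the paper's softened reverse KL objective is not reproduced here.

```python
# Hypothetical sketch (not the authors' code): distilling a multi-step teacher video
# diffusion model into a few-step student with an adversarial loss plus a
# reverse-KL-style distribution-matching term. Shapes and losses are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Toy stand-in for a video diffusion backbone: predicts the clean clip x0."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels + 1, 32, 3, padding=1), nn.SiLU(),
            nn.Conv3d(32, channels, 3, padding=1),
        )
    def forward(self, x_t, t):
        # Broadcast the timestep as an extra channel (crude conditioning).
        t_map = t.view(-1, 1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, t_map], dim=1))

class TinyDiscriminator(nn.Module):
    """Toy discriminator scoring whole video clips."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv3d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x).mean(dim=[1, 2, 3, 4])

def student_few_step_sample(student, noise, steps=4):
    """Naive few-step sampler: predict x0, then partially re-noise each step."""
    x = noise
    for i in reversed(range(steps)):
        t = torch.full((x.shape[0],), (i + 1) / steps, device=x.device)
        x0_pred = student(x, t)
        x = x0_pred + (i / steps) * torch.randn_like(x)
    return x0_pred

# Toy setup: 2 clips, 3 channels, 4 frames, 16x16 pixels.
teacher, student, disc = TinyDenoiser(), TinyDenoiser(), TinyDiscriminator()
teacher.requires_grad_(False)  # the teacher stays frozen during distillation
opt_g = torch.optim.Adam(student.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

real_clips = torch.randn(2, 3, 4, 16, 16)  # placeholder target clips
noise = torch.randn_like(real_clips)

# Discriminator step (hinge GAN loss).
fake = student_few_step_sample(student, noise).detach()
d_loss = F.relu(1 - disc(real_clips)).mean() + F.relu(1 + disc(fake)).mean()
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Student (generator) step: adversarial term plus a reverse-KL-style term that
# re-noises the student sample and pulls it toward the frozen teacher's denoised
# prediction, softened by a temperature-like weight `tau` (assumption).
fake = student_few_step_sample(student, noise)
adv_loss = -disc(fake).mean()
t = torch.rand(fake.shape[0])
noisy_fake = fake + t.view(-1, 1, 1, 1, 1) * torch.randn_like(fake)
with torch.no_grad():
    teacher_x0 = teacher(noisy_fake, t)
tau = 0.5  # softening weight (illustrative)
rkl_loss = tau * F.mse_loss(fake, teacher_x0)

g_loss = adv_loss + rkl_loss
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

The key design point the sketch tries to convey is that the student only ever runs a handful of denoising passes (here four), while the teacher is queried once per training step to supply the distribution-matching signal; at inference time only the few-step student is used, which is where the reported sampling-time reduction comes from.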
Similar Papers
Generative Neural Video Compression via Video Diffusion Prior
CV and Pattern Recognition
Makes videos look clearer and smoother when compressed.
Towards One-step Causal Video Generation via Adversarial Self-Distillation
CV and Pattern Recognition
Makes videos faster and better with fewer steps.
DT-NVS: Diffusion Transformers for Novel View Synthesis
CV and Pattern Recognition
Creates new pictures of a scene from one photo.