Broadening View Synthesis of Dynamic Scenes from Constrained Monocular Videos
By: Le Jiang, Shaotong Zhu, Yedi Luo, and more
Potential Business Impact:
Makes 3D videos look real from any angle.
In dynamic Neural Radiance Fields (NeRF) systems, state-of-the-art novel view synthesis methods often fail under significant viewpoint deviations, producing unstable and unrealistic renderings. To address this, we introduce Expanded Dynamic NeRF (ExpanDyNeRF), a monocular NeRF framework that leverages Gaussian splatting priors and a pseudo-ground-truth generation strategy to enable realistic synthesis under large-angle rotations. ExpanDyNeRF optimizes density and color features to improve scene reconstruction from challenging perspectives. We also present the Synthetic Dynamic Multiview (SynDM) dataset, the first synthetic multiview dataset for dynamic scenes with explicit side-view supervision, created using a custom GTA V-based rendering pipeline. Quantitative and qualitative results on SynDM and on real-world datasets demonstrate that ExpanDyNeRF significantly outperforms existing dynamic NeRF methods in rendering fidelity under extreme viewpoint shifts. Further details are provided in the supplementary materials.
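To make the pseudo-ground-truth idea concrete, below is a minimal sketch of how a Gaussian-splatting render from a rotated side view could supervise a dynamic NeRF's color and density outputs. This is not the paper's published implementation; the function name `pseudo_gt_loss`, the tensor shapes, and the loss weights are assumptions for illustration only.

```python
# Sketch: pseudo-ground-truth supervision for a side view, assuming the
# Gaussian splatting render (splat_rgb, splat_alpha) acts as the target
# for the NeRF render (nerf_rgb, nerf_density) at the same rotated camera.
import torch
import torch.nn.functional as F


def pseudo_gt_loss(nerf_rgb, nerf_density, splat_rgb, splat_alpha,
                   w_color=1.0, w_density=0.1):
    """Hypothetical loss combining color and density/opacity agreement.

    nerf_rgb, splat_rgb:       (H, W, 3) rendered colors at the side view
    nerf_density, splat_alpha: (H, W)    accumulated opacity maps
    w_color, w_density:        illustrative weights, not values from the paper
    """
    color_term = F.mse_loss(nerf_rgb, splat_rgb)
    density_term = F.mse_loss(nerf_density, splat_alpha)
    return w_color * color_term + w_density * density_term


if __name__ == "__main__":
    # Random tensors stand in for actual renders from the two models.
    H, W = 64, 64
    nerf_rgb = torch.rand(H, W, 3, requires_grad=True)
    nerf_density = torch.rand(H, W, requires_grad=True)
    splat_rgb = torch.rand(H, W, 3)    # pseudo GT color from Gaussian splatting
    splat_alpha = torch.rand(H, W)     # pseudo GT opacity from Gaussian splatting
    loss = pseudo_gt_loss(nerf_rgb, nerf_density, splat_rgb, splat_alpha)
    loss.backward()
    print(float(loss))
```

In this reading, the splatting prior supplies plausible appearance and geometry at viewpoints the monocular video never observed, and the NeRF is pulled toward it only at those unobserved angles; how the real method balances this against the photometric loss on observed frames is detailed in the paper and its supplementary materials.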
Similar Papers
VDNeRF: Vision-only Dynamic Neural Radiance Field for Urban Scenes
CV and Pattern Recognition
Makes robots see moving things and know where they are.
4D3R: Motion-Aware Neural Reconstruction and Rendering of Dynamic Scenes from Monocular Videos
CV and Pattern Recognition
Creates realistic 3D videos from regular videos.
Segmentation-Guided Neural Radiance Fields for Novel Street View Synthesis
CV and Pattern Recognition
Creates realistic 3D views of outdoor scenes.