OmniView: An All-Seeing Diffusion Model for 3D and 4D View Synthesis
By: Xiang Fan, Sharath Girish, Vivek Ramanujan, and more
Potential Business Impact:
Makes videos from any camera angle or moment in time, using text or image prompts.
Prior approaches injecting camera control into diffusion models have focused on specific subsets of 4D consistency tasks: novel view synthesis, text-to-video with camera control, image-to-video, among others. As a result, these fragmented approaches are trained on disjoint slices of the available 3D/4D data. We introduce OmniView, a unified framework that generalizes across a wide range of 4D consistency tasks. Our method separately represents space, time, and view conditions, enabling flexible combinations of these inputs. For example, OmniView can synthesize novel views from static, dynamic, and multiview inputs, extrapolate trajectories forward and backward in time, and create videos from text or image prompts with full camera control. OmniView is competitive with task-specific models across diverse benchmarks and metrics, improving image quality scores among camera-conditioned diffusion models by up to 33% on the multiview NVS LLFF dataset, 60% on the dynamic NVS Neural 3D Video benchmark, and 20% in static camera control on RE-10K, and reducing camera trajectory errors by 4x in text-conditioned video generation. With strong generalization in a single model, OmniView demonstrates the feasibility of a generalist 4D video model. The project page is available at https://snap-research.github.io/OmniView/
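The abstract's central design point is that space, time, and view conditions are represented separately, so any subset can be supplied and combined at inference time. Below is a minimal PyTorch sketch of that idea. Everything here is an illustrative assumption, not OmniView's actual architecture: the names (`SeparableCondition`, `d_model`), the choice of a flattened 3x4 extrinsic matrix for the view condition, the learned null embeddings for absent conditions, and the additive fusion. Space conditioning (e.g., reference-image tokens) is omitted for brevity.

```python
import torch
import torch.nn as nn
from typing import Optional

class SeparableCondition(nn.Module):
    """Embeds view (camera) and time conditions independently so that
    any subset can be supplied: both, either one, or neither.
    Hypothetical sketch; not OmniView's actual implementation."""

    def __init__(self, d_model: int = 256):
        super().__init__()
        # View condition: camera pose as a flattened 3x4 extrinsic matrix.
        self.view_mlp = nn.Sequential(
            nn.Linear(12, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        # Time condition: a scalar timestamp per sample.
        self.time_mlp = nn.Sequential(
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        # Learned "null" embeddings stand in for absent conditions, letting
        # one model cover static NVS, dynamic NVS, camera control, etc.
        self.null_view = nn.Parameter(torch.zeros(d_model))
        self.null_time = nn.Parameter(torch.zeros(d_model))

    def forward(self, view: Optional[torch.Tensor],
                time: Optional[torch.Tensor], batch: int) -> torch.Tensor:
        v = self.view_mlp(view) if view is not None else self.null_view.expand(batch, -1)
        t = self.time_mlp(time) if time is not None else self.null_time.expand(batch, -1)
        # Additive fusion keeps each condition independently controllable.
        return v + t

cond = SeparableCondition()
# Camera control over a static scene: view given, time omitted.
emb = cond(view=torch.randn(2, 12), time=None, batch=2)
print(emb.shape)  # torch.Size([2, 256])
```

The null-embedding trick is one plausible way a single model could be trained on disjoint slices of 3D/4D data, since each training example simply drops the conditions its dataset lacks.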
Similar Papers
OmniVDiff: Omni Controllable Video Diffusion for Generation and Understanding
CV and Pattern Recognition
Creates videos from text and understands video details.
Omni-View: Unlocking How Generation Facilitates Understanding in Unified 3D Model based on Multiview images
CV and Pattern Recognition
Builds 3D worlds from many pictures.
3D-Consistent Multi-View Editing by Diffusion Guidance
CV and Pattern Recognition
Makes 3D pictures look right after editing.