S²VG: 3D Stereoscopic and Spatial Video Generation via Denoising Frame Matrix
By: Peng Dai, Feitong Tan, Qiangeng Xu, and more
Potential Business Impact:
Turns ordinary 2D videos into immersive 3D stereoscopic and spatial videos.
While video generation models excel at producing high-quality monocular videos, generating 3D stereoscopic and spatial videos for immersive applications remains an underexplored challenge. We present a pose-free and training-free method that leverages an off-the-shelf monocular video generation model to produce immersive 3D videos. Our approach first warps the generated monocular video into pre-defined camera viewpoints using estimated depth information, then applies a novel frame matrix inpainting framework. This framework uses the original video generation model to synthesize missing content across different viewpoints and timestamps, ensuring spatial and temporal consistency without any additional model fine-tuning. Moreover, we develop a dual-update scheme that further improves inpainting quality by alleviating the negative effects propagated from disoccluded areas in the latent space. The resulting multi-view videos are then adapted into stereoscopic pairs or optimized into 4D Gaussians for spatial video synthesis. We validate the efficacy of the proposed method on videos from various generative models, including Sora, Lumiere, WALT, and Zeroscope; the experiments demonstrate that our method significantly improves over previous approaches. Project page: https://daipengwa.github.io/S-2VG_ProjectPage/
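To make the first stage concrete, here is a minimal sketch of the depth-based warping step: forward-warping one frame to a horizontally shifted (right-eye) viewpoint, assuming a simple pinhole camera with a horizontal baseline. The function name and parameter values are illustrative, not from the paper; the point is that the warp also yields the disocclusion mask that the inpainting stage must later fill.

```python
import numpy as np

def warp_to_shifted_view(frame, depth, baseline=0.06, focal=500.0):
    """Forward-warp `frame` (H, W, 3) to a horizontally shifted camera using
    per-pixel `depth` (H, W). Returns the warped frame and a disocclusion
    mask marking target pixels that received no source content."""
    h, w, _ = frame.shape
    # Horizontal disparity in pixels: nearer pixels shift more.
    disparity = focal * baseline / np.clip(depth, 1e-6, None)

    warped = np.zeros_like(frame)
    zbuf = np.full((h, w), np.inf)             # z-buffer: nearer surfaces win
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs - disparity).astype(int)  # points shift left in a right-eye view
    valid = (xt >= 0) & (xt < w)

    for y, x_src, x_dst in zip(ys[valid], xs[valid], xt[valid]):
        if depth[y, x_src] < zbuf[y, x_dst]:
            zbuf[y, x_dst] = depth[y, x_src]
            warped[y, x_dst] = frame[y, x_src]

    disocclusion_mask = np.isinf(zbuf)         # True where a hole remains
    return warped, disocclusion_mask
```

The core idea of the frame matrix is that per-frame latents arranged in a views × timestamps grid can be denoised along both axes by the same video model: each row (fixed view over time) is an ordinary video, and each column (fixed time across views) also reads as a smooth camera sweep. The following schematic shows alternating row/column denoising passes under that reading; `denoise_sequence` is a stand-in for one reverse-diffusion step of the video model, and the blending of observed content is simplified relative to the paper's latent-space treatment of disoccluded areas.

```python
import numpy as np

def denoise_frame_matrix(latents, known, holes, denoise_sequence, num_steps=50):
    """Schematic frame-matrix inpainting over latents of shape (V, T, C, H, W),
    i.e. V camera views x T timestamps. `known` holds latents of the warped
    (observed) content; `holes` is a broadcastable boolean mask, True where
    content is missing and must be synthesized."""
    V, T = latents.shape[:2]
    for step in range(num_steps):
        if step % 2 == 0:
            for v in range(V):                 # rows: fixed view, varying time
                latents[v] = denoise_sequence(latents[v], step)
        else:
            for t in range(T):                 # columns: fixed time, varying view
                latents[:, t] = denoise_sequence(latents[:, t], step)
        # Pin observed regions to the warped content so only holes evolve.
        # (A full implementation would re-noise `known` to the current noise
        # level before blending, RePaint-style.)
        latents = np.where(holes, latents, known)
    return latents
```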
Similar Papers
StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation
CV and Pattern Recognition
Creates realistic 3D videos from normal ones.
Eye2Eye: A Simple Approach for Monocular-to-Stereo Video Synthesis
CV and Pattern Recognition
Makes normal videos look like 3D movies.