StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation
By: Ke Xing, Longfei Li, Yuyang Yin, and more
The growing adoption of XR devices has fueled strong demand for high-quality stereo video, yet its production remains costly and artifact-prone. To address this challenge, we present StereoWorld, an end-to-end framework that repurposes a pretrained video generator for high-fidelity monocular-to-stereo video generation. Our framework jointly conditions the model on the monocular video input while explicitly supervising the generation with a geometry-aware regularization to ensure 3D structural fidelity. A spatio-temporal tiling scheme is further integrated to enable efficient, high-resolution synthesis. To enable large-scale training and evaluation, we curate a high-definition stereo video dataset containing over 11M frames aligned to natural human interpupillary distance (IPD). Extensive experiments demonstrate that StereoWorld substantially outperforms prior methods, generating stereo videos with superior visual fidelity and geometric consistency. The project webpage is available at https://ke-xing.github.io/StereoWorld/.
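The abstract mentions a spatio-temporal tiling scheme for efficient high-resolution synthesis but gives no implementation details. As a rough illustration of the general idea, the sketch below splits a video tensor into overlapping tiles along time and space, processes each tile independently, and blends overlaps by averaging. All function names, tile sizes, and the uniform-averaging blend are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def split_1d(length, tile, overlap):
    """Start indices of overlapping windows covering [0, length)."""
    if tile >= length:
        return [0]
    stride = tile - overlap
    starts = list(range(0, length - tile, stride))
    starts.append(length - tile)  # final window flush with the end
    return starts

def tiled_process(video, fn, tile_t=2, tile_hw=8, overlap=4):
    """Apply `fn` tile-by-tile over time (T) and space (H, W),
    averaging overlapping regions to smooth seams between tiles.

    video: float array of shape (T, H, W, C)
    fn:    a per-tile model call (here a stand-in for the generator)
    """
    T, H, W, C = video.shape
    out = np.zeros_like(video, dtype=np.float64)
    weight = np.zeros((T, H, W, 1), dtype=np.float64)
    for t0 in split_1d(T, tile_t, overlap):
        for y0 in split_1d(H, tile_hw, overlap):
            for x0 in split_1d(W, tile_hw, overlap):
                sl = (slice(t0, t0 + tile_t),
                      slice(y0, y0 + tile_hw),
                      slice(x0, x0 + tile_hw))
                out[sl] += fn(video[sl])   # accumulate tile output
                weight[sl] += 1.0          # count overlapping tiles
    return out / weight                    # uniform blend of overlaps
```

With an identity `fn`, the blended reconstruction recovers the input exactly, which is a quick sanity check that the tiling covers every pixel and the overlap weights are consistent.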