StereoSpace: Depth-Free Synthesis of Stereo Geometry via End-to-End Diffusion in a Canonical Space
By: Tjark Behrens, Anton Obukhov, Bingxin Ke, and more
Potential Business Impact:
Makes one camera see in 3D like two.
We introduce StereoSpace, a diffusion-based framework for monocular-to-stereo synthesis that models geometry purely through viewpoint conditioning, without explicit depth or warping. A canonical rectified space, together with the viewpoint conditioning, guides the generator to infer correspondences and fill disocclusions end-to-end. To ensure fair, leakage-free evaluation, we introduce an end-to-end protocol that excludes any ground-truth or proxy geometry estimates at test time and emphasizes metrics of downstream relevance: iSQoE for perceptual comfort and MEt3R for geometric consistency. StereoSpace surpasses methods from the warp-and-inpaint, latent-warping, and warped-conditioning categories, achieving sharp parallax and strong robustness on layered and non-Lambertian scenes. This establishes viewpoint-conditioned diffusion as a scalable, depth-free solution for stereo generation.
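To make the depth-free idea concrete, the sketch below shows one plausible reading of viewpoint-conditioned generation in a canonical rectified space: a DDPM-style sampler denoises the right view from pure noise, conditioned only on the left image, the diffusion timestep, and a canonical baseline value. This is a minimal illustration under stated assumptions, not the paper's architecture; the Denoiser module, the channel-concatenation conditioning, and the baseline number are hypothetical stand-ins. The point it makes is structural: no depth map is estimated and no warping step appears anywhere in the loop.

# Minimal sketch (assumed, not the authors' code) of viewpoint-conditioned
# stereo synthesis: generate the right view directly from the left view plus
# a canonical rectified baseline, with no depth estimation or warping.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    # Toy stand-in for the diffusion backbone. Input channels:
    # 3 (noisy right view) + 3 (left-view condition) + 1 (timestep) + 1 (baseline).
    # A real model would be a UNet or transformer backbone.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(8, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x_t, left, t, baseline):
        b, _, h, w = x_t.shape
        # Broadcast scalar conditions to spatial maps and concatenate on channels.
        t_map = t.view(b, 1, 1, 1).expand(b, 1, h, w)
        base_map = baseline.view(b, 1, 1, 1).expand(b, 1, h, w)
        return self.net(torch.cat([x_t, left, t_map, base_map], dim=1))

@torch.no_grad()
def sample_right_view(model, left, baseline, steps=50):
    # DDPM-style ancestral sampling of the right view, conditioned only on the
    # left image and the target viewpoint (canonical rectified baseline).
    b, c, h, w = left.shape
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(b, c, h, w)  # start from pure noise; no warped initialization
    for i in reversed(range(steps)):
        t = torch.full((b,), i / steps)
        eps = model(x, left, t, baseline)  # predicted noise
        a, ab = alphas[i], alpha_bars[i]
        mean = (x - (1 - a) / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)
        noise = torch.randn_like(x) if i > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[i]) * noise
    return x

if __name__ == "__main__":
    left_image = torch.rand(1, 3, 64, 64)   # monocular input
    baseline = torch.tensor([0.065])        # canonical baseline in meters (assumed value)
    right_image = sample_right_view(Denoiser(), left_image, baseline)
    print(right_image.shape)                # torch.Size([1, 3, 64, 64])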
Similar Papers
GeoDiff: Geometry-Guided Diffusion for Metric Depth Estimation
CV and Pattern Recognition
Makes single-camera pictures show true distances.
DMS: Diffusion-Based Multi-Baseline Stereo Generation for Improving Self-Supervised Depth Estimation
CV and Pattern Recognition
Makes 3D pictures from two photos better.
StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation
CV and Pattern Recognition
Creates realistic 3D videos from normal ones.