Elastic3D: Controllable Stereo Video Conversion with Guided Latent Decoding
By: Nando Metzger, Prune Truong, Goutam Bhat, and more
Potential Business Impact:
Makes regular videos look 3D with adjustable depth.
The growing demand for immersive 3D content calls for automated monocular-to-stereo video conversion. We present Elastic3D, a controllable, direct end-to-end method for upgrading a conventional video to a binocular one. Our approach, based on (conditional) latent diffusion, avoids the artifacts caused by explicit depth estimation and warping. The key to its high-quality output is a novel, guided VAE decoder that ensures sharp and epipolar-consistent stereo video. Moreover, our method gives the user control over the strength of the stereo effect (more precisely, the disparity range) at inference time, via an intuitive, scalar tuning knob. Experiments on three different datasets of real-world stereo videos show that our method outperforms both traditional warping-based and recent warping-free baselines and sets a new standard for reliable, controllable stereo video conversion. Video samples are available on the project page: https://elastic3d.github.io.
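To make the "scalar tuning knob" idea concrete, below is a minimal, hypothetical sketch (not the authors' code) of how a user-facing stereo-strength scalar could be mapped to a disparity-range conditioning vector for a conditional latent diffusion model; the function name, the maximum disparity value, and the embedding scheme are all assumptions chosen for illustration.

```python
# Hypothetical sketch: mapping an inference-time scalar "stereo strength" knob
# to a conditioning embedding for a conditional latent diffusion denoiser.
# All names and constants are illustrative, not from the paper.
import torch

def disparity_conditioning(strength: float,
                           max_disparity_px: float = 64.0,
                           embed_dim: int = 256) -> torch.Tensor:
    """Map a user knob in [0, 1] to a sinusoidal embedding of the target
    disparity range, analogous to a diffusion timestep embedding."""
    strength = float(min(max(strength, 0.0), 1.0))   # clamp the knob to [0, 1]
    target_disparity = strength * max_disparity_px   # desired max disparity in pixels
    # Sinusoidal embedding of the scalar, so the denoiser can be conditioned
    # on it alongside the monocular video latents.
    half = embed_dim // 2
    freqs = torch.exp(-torch.arange(half, dtype=torch.float32)
                      * (torch.log(torch.tensor(10000.0)) / (half - 1)))
    angles = target_disparity * freqs
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=0)

# Usage: the same input video with a weaker vs. stronger stereo effect.
mild = disparity_conditioning(0.25)   # shallow depth effect
strong = disparity_conditioning(0.9)  # pronounced depth effect
print(mild.shape, strong.shape)       # torch.Size([256]) torch.Size([256])
```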
Similar Papers
StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation
CV and Pattern Recognition
Creates realistic 3D videos from normal ones.
S^2VG: 3D Stereoscopic and Spatial Video Generation via Denoising Frame Matrix
CV and Pattern Recognition
Makes normal videos feel 3D and real.