Dream-to-Recon: Monocular 3D Reconstruction with Diffusion-Depth Distillation from Single Images
By: Philipp Wulff, Felix Wimbauer, Dominik Muhle, and others
Potential Business Impact:
Creates 3D scenes from one picture.
Volumetric scene reconstruction from a single image is crucial for a broad range of applications, such as autonomous driving and robotics. Recent volumetric reconstruction methods achieve impressive results but generally require expensive 3D ground truth or multi-view supervision. We propose to leverage pre-trained 2D diffusion models and depth prediction models to generate synthetic scene geometry from a single image, which can then be used to distill a feed-forward scene reconstruction model. Our experiments on the challenging KITTI-360 and Waymo datasets demonstrate that our method matches or outperforms state-of-the-art baselines that use multi-view supervision, and offers unique advantages, for example in handling dynamic scenes.
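To make the distillation idea from the abstract concrete, here is a minimal sketch of one plausible reading: a pretrained depth model supplies per-pixel depth for the input image, that depth is unprojected into a voxel occupancy grid to serve as pseudo ground truth, and a feed-forward student is supervised against it. All function names, shapes, and the pinhole intrinsics below are illustrative assumptions, not the paper's actual implementation (which additionally uses diffusion models to synthesize occluded geometry).

```python
import numpy as np

def depth_to_occupancy(depth, fx, fy, cx, cy, grid_size=16, extent=8.0):
    """Unproject a depth map into a coarse binary voxel occupancy grid.

    `depth` is an (H, W) array of metric depths from a (hypothetical)
    pretrained monocular depth model; fx/fy/cx/cy are pinhole intrinsics.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel to a 3D point in camera coordinates.
    x = (us - cx) / fx * depth
    y = (vs - cy) / fy * depth
    z = depth
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Map metric coordinates into voxel indices covering
    # [-extent/2, extent/2] in x/y and [0, extent] in z.
    idx = np.floor(
        (pts + np.array([extent / 2, extent / 2, 0.0])) / extent * grid_size
    ).astype(int)
    occ = np.zeros((grid_size,) * 3, dtype=np.float32)
    valid = np.all((idx >= 0) & (idx < grid_size), axis=1)
    occ[tuple(idx[valid].T)] = 1.0
    return occ

def distillation_loss(student_logits, teacher_occ):
    """Binary cross-entropy between student occupancy logits and pseudo labels."""
    p = 1.0 / (1.0 + np.exp(-student_logits))
    eps = 1e-7
    return float(-np.mean(
        teacher_occ * np.log(p + eps) + (1 - teacher_occ) * np.log(1 - p + eps)
    ))

# Toy usage: a flat "wall" 4 m in front of the camera occupies a single depth slab,
# and an untrained student (all-zero logits) incurs a chance-level BCE loss.
depth = np.full((32, 32), 4.0)
occ = depth_to_occupancy(depth, fx=32.0, fy=32.0, cx=16.0, cy=16.0)
loss = distillation_loss(np.zeros_like(occ), occ)
```

In the actual method, the pseudo ground truth would also cover regions occluded in the input view (filled in by the diffusion model), which is what lets a single image supervise full volumetric geometry.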
Similar Papers
Enhancing Monocular 3D Scene Completion with Diffusion Model
Graphics
Turns one picture into a full 3D world.
Lightweight and Accurate Multi-View Stereo with Confidence-Aware Diffusion Model
CV and Pattern Recognition
Creates 3D shapes from pictures faster.
Light Transport-aware Diffusion Posterior Sampling for Single-View Reconstruction of 3D Volumes
CV and Pattern Recognition
Makes cloudy skies look real from one picture.