GEN3D: Generating Domain-Free 3D Scenes from a Single Image
By: Yuxin Zhang, Ziyu Lu, Hongbo Duan, and others
Potential Business Impact:
Creates realistic 3D worlds from one picture.
Despite recent advances in neural 3D reconstruction, dependence on dense multi-view captures restricts broader applicability. 3D scene generation is also vital for advancing embodied AI and world models, which depend on diverse, high-quality scenes for learning and evaluation. In this work, we propose GEN3D, a novel method for generating high-quality, wide-scope, and generic 3D scenes from a single image. After an initial point cloud is created by lifting the RGBD image, GEN3D maintains and expands its world model. The 3D scene is finalized by optimizing a Gaussian splatting representation. Extensive experiments on diverse datasets demonstrate the strong generalization capability and superior performance of our method, both in generating a world model and in synthesizing high-fidelity, consistent novel views.
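The abstract does not spell out how the RGBD image is lifted into the initial point cloud. Under the standard pinhole-camera assumption, each pixel with a depth value can be back-projected into 3D using the camera intrinsics; the following NumPy sketch (function name and intrinsics are illustrative, not from the paper) shows that step:

```python
import numpy as np

def lift_rgbd_to_point_cloud(rgb, depth, fx, fy, cx, cy):
    """Back-project an RGBD image into a colored 3D point cloud.

    Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy, z = depth.
    rgb:   (H, W, 3) color image
    depth: (H, W) depth map in camera units (0 = invalid)
    """
    h, w = depth.shape
    # Pixel coordinate grids: u along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0  # drop pixels with no depth reading
    return points[valid], colors[valid]

# Tiny 2x2 example with unit depth everywhere.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
depth = np.ones((2, 2))
pts, cols = lift_rgbd_to_point_cloud(rgb, depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

In a pipeline like the one described, this colored point cloud would then seed the world model that is iteratively expanded and later refined into a Gaussian splatting representation.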
Similar Papers
Self-Evolving 3D Scene Generation from a Single Image
CV and Pattern Recognition
Creates 3D worlds from one picture.
Gen3R: 3D Scene Generation Meets Feed-Forward Reconstruction
CV and Pattern Recognition
Creates 3D worlds from pictures and videos.
3D-RE-GEN: 3D Reconstruction of Indoor Scenes with a Generative Framework
CV and Pattern Recognition
Builds 3D worlds from one picture.