GEN3D: Generating Domain-Free 3D Scenes from a Single Image
By: Yuxin Zhang, Ziyu Lu, Hongbo Duan, and more
Potential Business Impact:
Creates realistic 3D worlds from one picture.
Despite recent advances in neural 3D reconstruction, the dependence on dense multi-view captures restricts its broader applicability. At the same time, 3D scene generation is vital for advancing embodied AI and world models, which depend on diverse, high-quality scenes for learning and evaluation. In this work, we propose Gen3d, a novel method for generating high-quality, wide-scope, and generic 3D scenes from a single image. After an initial point cloud is created by lifting the RGBD image, Gen3d maintains and expands its world model. The 3D scene is then finalized by optimizing a Gaussian splatting representation. Extensive experiments on diverse datasets demonstrate our method's strong generalization capability and superior performance in generating a world model and synthesizing high-fidelity, consistent novel views.
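The first stage described in the abstract, lifting an RGBD image into an initial point cloud, amounts to back-projecting each pixel through a pinhole camera model. The sketch below illustrates that step only; the function name, interface, and intrinsics (fx, fy, cx, cy) are illustrative assumptions and are not taken from the paper's implementation.

```python
# Minimal sketch: lift an RGBD image into a colored 3D point cloud via
# pinhole back-projection. Names and intrinsics are assumptions, not Gen3d code.
import numpy as np

def lift_rgbd_to_point_cloud(rgb, depth, fx, fy, cx, cy):
    """Back-project per-pixel depth into camera-space 3D points with colors.

    rgb:   (H, W, 3) uint8 image
    depth: (H, W) float32 depth map in meters (0 = invalid)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid coordinates

    valid = depth > 0                # skip pixels with no depth estimate
    z = depth[valid]
    x = (u[valid] - cx) * z / fx     # standard pinhole back-projection
    y = (v[valid] - cy) * z / fy

    points = np.stack([x, y, z], axis=-1)            # (N, 3) camera-space points
    colors = rgb[valid].astype(np.float32) / 255.0   # (N, 3) RGB in [0, 1]
    return points, colors
```

Per the abstract, a point cloud of this kind seeds the world model that Gen3d then maintains and expands, before the scene is refined by optimizing a Gaussian splatting representation.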
Similar Papers
SPATIALGEN: Layout-guided 3D Indoor Scene Generation
CV and Pattern Recognition
Builds realistic 3D rooms from pictures.
SceneGen: Single-Image 3D Scene Generation in One Feedforward Pass
CV and Pattern Recognition
Makes 3D worlds from one picture.
TRELLISWorld: Training-Free World Generation from Object Generators
CV and Pattern Recognition
Creates 3D worlds from text descriptions.