Self-Evolving 3D Scene Generation from a Single Image
By: Kaizhi Zheng, Yue Fan, Jing Gu, and more
Potential Business Impact:
Creates 3D worlds from one picture.
Generating high-quality, textured 3D scenes from a single image remains a fundamental challenge in vision and graphics. Recent image-to-3D generators recover reasonable geometry from single views, but their object-centric training limits generalization to complex, large-scale scenes with faithful structure and texture. We present EvoScene, a self-evolving, training-free framework that progressively reconstructs complete 3D scenes from single images. The key idea is combining the complementary strengths of existing models: geometric reasoning from 3D generation models and visual knowledge from video generation models. Through three iterative stages--Spatial Prior Initialization, Visual-guided 3D Scene Mesh Generation, and Spatial-guided Novel View Generation--EvoScene alternates between 2D and 3D domains, gradually improving both structure and appearance. Experiments on diverse scenes demonstrate that EvoScene achieves superior geometric stability, view-consistent textures, and unseen-region completion compared to strong baselines, producing ready-to-use 3D meshes for practical applications.
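The abstract describes an alternating 2D/3D refinement loop. Below is a minimal sketch of that control flow in Python, under stated assumptions: every class and function (ImageTo3DModel, VideoGenModel, render_views, evoscene) is a hypothetical placeholder illustrating the three-stage iteration, not the authors' actual API or implementation.

# Minimal sketch of the EvoScene-style iterative loop from the abstract.
# All names here are illustrative stand-ins, not the paper's real code.

class ImageTo3DModel:
    """Placeholder for an off-the-shelf image-to-3D generation model."""
    def init_spatial_prior(self, image):
        # Stage 1: Spatial Prior Initialization -- coarse geometry
        # bootstrapped from the single input image.
        return {"geometry": "coarse", "texture": None}

    def generate_mesh(self, views, prior):
        # Stage 2: Visual-guided 3D Scene Mesh Generation -- refine the
        # mesh against all 2D views accumulated so far.
        return {"geometry": "refined", "texture": "updated"}

class VideoGenModel:
    """Placeholder for a video generation model with visual priors."""
    def generate_novel_views(self, renders):
        # Stage 3: Spatial-guided Novel View Generation -- complete
        # unseen regions in renders of the current mesh.
        return [f"completed({r})" for r in renders]

def render_views(mesh, num_cameras=4):
    # Placeholder: render the current mesh from sampled camera poses.
    return [f"render_{i}" for i in range(num_cameras)]

def evoscene(image, num_iters=3):
    image_to_3d, video_gen = ImageTo3DModel(), VideoGenModel()

    mesh = image_to_3d.init_spatial_prior(image)  # Stage 1
    views = [image]

    for _ in range(num_iters):
        # Alternate between the 3D domain (mesh refinement) and the
        # 2D domain (novel-view completion), feeding each stage's
        # output back into the other.
        mesh = image_to_3d.generate_mesh(views, prior=mesh)         # Stage 2
        views += video_gen.generate_novel_views(render_views(mesh))  # Stage 3

    return mesh  # ready-to-use textured scene mesh

print(evoscene("input.png"))

The point of the sketch is the alternation: each pass through Stage 2 improves structure using the growing pool of views, and each pass through Stage 3 improves appearance and fills unseen regions using the improved structure, which is why the framework needs no additional training.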
Similar Papers
GEN3D: Generating Domain-Free 3D Scenes from a Single Image
CV and Pattern Recognition
Creates realistic 3D worlds from one picture.
SceneGen: Single-Image 3D Scene Generation in One Feedforward Pass
CV and Pattern Recognition
Makes 3D worlds from one picture.
TRELLISWorld: Training-Free World Generation from Object Generators
CV and Pattern Recognition
Builds 3D worlds from just words.