WonderZoom: Multi-Scale 3D World Generation
By: Jin Cao, Hong-Xing Yu, Jiajun Wu
Potential Business Impact:
Turns a single picture into a zoomable 3D world.
We present WonderZoom, a novel approach to generating 3D scenes with content across multiple spatial scales from a single image. Existing 3D world generation models remain limited to single-scale synthesis and cannot produce coherent scene content at varying granularities. The fundamental challenge is the lack of a scale-aware 3D representation capable of generating and rendering content with largely different spatial sizes. WonderZoom addresses this through two key innovations: (1) scale-adaptive Gaussian surfels for generating and rendering multi-scale 3D scenes in real time, and (2) a progressive detail synthesizer that iteratively generates finer-scale 3D content. Our approach enables users to "zoom into" a 3D region and auto-regressively synthesize previously non-existent fine details, from landscapes down to microscopic features. Experiments demonstrate that WonderZoom significantly outperforms state-of-the-art video and 3D models in both quality and alignment, enabling multi-scale 3D world creation from a single image. We show video results and an interactive viewer of generated multi-scale 3D worlds at https://wonderzoom.github.io/
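The auto-regressive "zoom in, then synthesize finer detail" loop described in the abstract can be pictured with a minimal toy sketch. Everything here is a hypothetical placeholder, not the authors' actual representation or API: `SurfelScene`, `synthesize_finer`, and `zoom_loop` are illustrative names, and the scene is reduced to a scale value plus a list of generated content tags.

```python
# Hypothetical sketch of an auto-regressive multi-scale generation loop,
# loosely following the abstract's high-level description. All names here
# (SurfelScene, synthesize_finer, zoom_loop) are illustrative placeholders,
# not the authors' method or API.

from dataclasses import dataclass, field


@dataclass
class SurfelScene:
    """Toy stand-in for a scale-adaptive Gaussian-surfel scene."""
    scale: float                              # spatial scale of the finest content
    surfels: list = field(default_factory=list)


def synthesize_finer(scene: SurfelScene, region: str, zoom: float) -> SurfelScene:
    """Placeholder detail synthesizer: keeps prior content and adds a
    record of newly generated content at a finer spatial scale."""
    finer = SurfelScene(scale=scene.scale / zoom)
    finer.surfels = scene.surfels + [f"{region}@{finer.scale:g}"]
    return finer


def zoom_loop(initial: SurfelScene, regions: list, zoom: float = 10.0) -> SurfelScene:
    """Auto-regressively refine the scene, one user-selected region per step."""
    scene = initial
    for region in regions:
        scene = synthesize_finer(scene, region, zoom)
    return scene


# Start from a landscape-scale scene and zoom in three times.
scene = zoom_loop(SurfelScene(scale=100.0), regions=["hill", "rock", "moss"])
print(scene.scale)    # 100 / 10^3 = 0.1
print(scene.surfels)
```

The point of the sketch is only the control flow: each zoom step conditions on the existing scene and produces previously non-existent content at a roughly order-of-magnitude finer scale, which is why a scale-aware representation is needed in the first place.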
Similar Papers
WorldGrow: Generating Infinite 3D World
CV and Pattern Recognition
Builds endless, realistic 3D worlds for games.
WonderVerse: Extendable 3D Scene Generation with Video Generative Models
CV and Pattern Recognition
Creates realistic, big 3D worlds from videos.
FlexWorld: Progressively Expanding 3D Scenes for Flexible-View Synthesis
CV and Pattern Recognition
Turns one picture into a 3D world.