ScenDi: 3D-to-2D Scene Diffusion Cascades for Urban Generation
By: Hanlei Guo, Jiahao Shao, Xinya Chen, and more
Potential Business Impact:
Creates realistic, controllable 3D city scenes from simple inputs such as road maps, bounding boxes, or text prompts.
Recent advances in diffusion-based 3D object generation have achieved remarkable success, but generating realistic 3D urban scenes remains challenging. Existing methods that rely solely on 3D diffusion models tend to lose appearance detail, while those that use only 2D diffusion models typically compromise camera controllability. To overcome these limitations, we propose ScenDi, a method for urban scene generation that integrates both 3D and 2D diffusion models. We first train a 3D latent diffusion model to generate 3D Gaussians, which can be rendered into images at a relatively low resolution. To enable controllable synthesis, this 3DGS generation process can optionally be conditioned on inputs such as 3D bounding boxes, road maps, or text prompts. We then train a 2D video diffusion model to enhance appearance details, conditioned on images rendered from the 3D Gaussians. By using the coarse 3D scene to guide the 2D video diffusion, ScenDi generates the desired scenes from the input conditions while adhering to accurate camera trajectories. Experiments on two challenging real-world datasets, Waymo and KITTI-360, demonstrate the effectiveness of our approach.
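To make the cascade concrete, below is a minimal, illustrative PyTorch sketch of the two-stage pipeline the abstract describes; it is not the authors' implementation. All module and function names (LatentDenoiser3D, GaussianDecoder, VideoRefiner, render_lowres) are hypothetical, the sampling loop is a deliberately simplified stand-in for a real diffusion scheduler, and the toy point-splat renderer stands in for 3D Gaussian splatting rasterization. Stage 1 samples a scene latent conditioned on an embedding of the bounding boxes, road map, or text, decodes it into 3D Gaussians, and renders coarse low-resolution frames along a camera trajectory; Stage 2 refines those frames with a 2D video diffusion model conditioned on the coarse renders.

# Hypothetical sketch of ScenDi's 3D-to-2D cascade; simplified stand-ins only.
import torch
import torch.nn as nn


class LatentDenoiser3D(nn.Module):
    """Stand-in for the conditional 3D latent diffusion backbone (Stage 1)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * 2 + 1, 128), nn.SiLU(), nn.Linear(128, dim))

    def forward(self, z, t, cond):
        # Latent, condition embedding (boxes / road map / text), and timestep.
        return self.net(torch.cat([z, cond, t], dim=-1))


class GaussianDecoder(nn.Module):
    """Maps the sampled scene latent to a set of 3D Gaussians."""
    def __init__(self, dim: int = 64, n_gaussians: int = 256):
        super().__init__()
        self.n = n_gaussians
        self.head = nn.Linear(dim, n_gaussians * 7)  # xyz(3) + rgb(3) + opacity(1)

    def forward(self, z):
        return self.head(z).view(self.n, 7)


class VideoRefiner(nn.Module):
    """Stand-in for the 2D video diffusion model (Stage 2)."""
    def __init__(self):
        super().__init__()
        # Predicts noise from the noisy frame concatenated with the coarse render.
        self.net = nn.Conv2d(6, 3, kernel_size=3, padding=1)

    def forward(self, noisy, coarse, t):  # timestep unused in this toy version
        return self.net(torch.cat([noisy, coarse], dim=1))


def ddpm_sample(denoise_fn, shape, steps: int = 20):
    """Deliberately simplified ancestral sampling loop shared by both stages."""
    x = torch.randn(shape)
    for i in reversed(range(steps)):
        t = torch.full((shape[0], 1), i / steps)
        x = x - denoise_fn(x, t) / steps          # crude denoising update
        if i > 0:
            x = x + torch.randn_like(x) / steps   # re-inject a little noise
    return x


def render_lowres(gaussians, camera, size: int = 32):
    """Toy point-splat renderer standing in for 3DGS rasterization."""
    img = torch.zeros(3, size, size)
    xyz, rgb = gaussians[:, :3], gaussians[:, 3:6].sigmoid()
    alpha = gaussians[:, 6:].sigmoid()
    uv = (xyz[:, :2] @ camera[:2, :2].T + camera[:2, 2]).clamp(0, size - 1).long()
    img[:, uv[:, 1], uv[:, 0]] = (rgb * alpha).T
    return img


@torch.no_grad()
def generate_scene(cond_embed, cameras, denoiser, decoder, refiner):
    # Stage 1: sample the 3D scene latent under the given condition,
    # decode it into Gaussians, and render coarse frames along the trajectory.
    z = ddpm_sample(lambda x, t: denoiser(x, t, cond_embed), (1, 64))
    gaussians = decoder(z)
    coarse = torch.stack([render_lowres(gaussians, cam) for cam in cameras])
    # Stage 2: refine the coarse renders with the 2D video diffusion model.
    refined = ddpm_sample(lambda x, t: refiner(x, coarse, t), coarse.shape)
    return gaussians, refined


if __name__ == "__main__":
    cond = torch.randn(1, 64)                      # embedding of the conditions
    cam = torch.tensor([[8.0, 0.0, 16.0], [0.0, 8.0, 16.0]])
    gaussians, frames = generate_scene(
        cond, [cam] * 4, LatentDenoiser3D(), GaussianDecoder(), VideoRefiner())
    print(frames.shape)                            # torch.Size([4, 3, 32, 32])

The design point mirrored in this sketch is that camera control lives entirely in Stage 1: the trajectory only affects the coarse renders, and the 2D refiner is conditioned on those renders rather than on raw camera parameters, which is how the cascade can preserve controllability while recovering appearance detail.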
Similar Papers
GeoDiff3D: Self-Supervised 3D Scene Generation with Geometry-Constrained 2D Diffusion Guidance
CV and Pattern Recognition
Creates realistic 3D worlds from simple pictures.
Sat2City: 3D City Generation from A Single Satellite Image with Cascaded Latent Diffusion
CV and Pattern Recognition
Creates detailed 3D cities from satellite pictures.