LSD-3D: Large-Scale 3D Driving Scene Generation with Geometry Grounding
By: Julian Ost, Andrea Ramazzina, Amogh Joshi, and more
Potential Business Impact:
Creates 3D driving worlds for robots to learn.
Large-scale scene data is essential for training and testing in robot learning. Neural reconstruction methods promise to recover large, physically grounded outdoor scenes from captured sensor data. However, these reconstructions bake in static environments and allow only limited scene control: they are functionally constrained in scene and trajectory diversity by the captures from which they are built. Generating driving data with recent image or video diffusion models, in contrast, offers control, but at the cost of geometry grounding and causality. In this work, we aim to bridge this gap and present a method that directly generates large-scale 3D driving scenes with accurate geometry, allowing for causal novel view synthesis with object permanence and explicit 3D geometry estimation. The proposed method combines the generation of a proxy geometry and environment representation with score distillation from learned 2D image priors. We find that this approach offers high controllability, enabling prompt-guided geometry and high-fidelity texture and structure that can be conditioned on map layouts, producing realistic and geometrically consistent 3D generations of complex driving scenes.
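The score-distillation step mentioned above can be sketched schematically. This is a minimal, hypothetical illustration of the general Score Distillation Sampling (SDS) idea (noise a rendering of the 3D scene, query a frozen 2D diffusion prior, and use its noise residual as a gradient signal for the 3D parameters); the `denoiser` callable, the weighting `w(t) = 1 - alpha_bar`, and all shapes are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

def sds_gradient(render, denoiser, t, alpha_bar, rng):
    """One schematic SDS step (assumed form, not the paper's code).

    render:    image rendered from the current 3D scene (H, W, C)
    denoiser:  hypothetical frozen 2D diffusion prior; given a noisy
               image and timestep t, predicts the added noise
    alpha_bar: cumulative noise-schedule coefficient at timestep t
    """
    noise = rng.standard_normal(render.shape)
    # Forward-diffuse the rendering to timestep t
    noisy = np.sqrt(alpha_bar) * render + np.sqrt(1.0 - alpha_bar) * noise
    # The SDS signal is the prior's noise residual, weighted by w(t);
    # in a full system it is backpropagated through the renderer
    # to the 3D scene parameters
    w_t = 1.0 - alpha_bar
    return w_t * (denoiser(noisy, t) - noise)

# Toy usage with a dummy prior that predicts zero noise
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
grad = sds_gradient(img, lambda x, t: np.zeros_like(x),
                    t=500, alpha_bar=0.5, rng=rng)
```

In practice the gradient above is pushed through a differentiable renderer, which is what ties the 2D image prior to the 3D geometry and lets the proxy geometry stay consistent across viewpoints.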
Similar Papers
X-Scene: Large-Scale Driving Scene Generation with High Fidelity and Flexible Controllability
CV and Pattern Recognition
Creates realistic driving worlds for self-driving cars.
GEN3D: Generating Domain-Free 3D Scenes from a Single Image
CV and Pattern Recognition
Creates realistic 3D worlds from one picture.
Structured Interfaces for Automated Reasoning with 3D Scene Graphs
CV and Pattern Recognition
Robots understand spoken words by seeing objects.