La La LiDAR: Large-Scale Layout Generation from LiDAR Data
By: Youquan Liu, Lingdong Kong, Weidong Yang, and more
Potential Business Impact:
Generates realistic, controllable 3D LiDAR scenes for training and validating self-driving cars.
Controllable generation of realistic LiDAR scenes is crucial for applications such as autonomous driving and robotics. While recent diffusion-based models achieve high-fidelity LiDAR generation, they lack explicit control over foreground objects and spatial relationships, limiting their usefulness for scenario simulation and safety validation. To address these limitations, we propose the Large-scale Layout-guided LiDAR generation model ("La La LiDAR"), a novel generative framework that introduces semantic-enhanced scene graph diffusion with relation-aware contextual conditioning for structured LiDAR layout generation, followed by foreground-aware control injection for complete scene generation. This design enables customizable control over object placement while ensuring spatial and semantic consistency. To support structured LiDAR generation, we introduce Waymo-SG and nuScenes-SG, two large-scale LiDAR scene graph datasets, along with new evaluation metrics for layout synthesis. Extensive experiments demonstrate that La La LiDAR achieves state-of-the-art performance in both LiDAR generation and downstream perception tasks, establishing a new benchmark for controllable 3D scene generation.
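To make the two-stage design concrete, here is a minimal Python sketch of how such a pipeline could be organized: a semantic scene graph conditions a layout diffusion stage, and the resulting layout then conditions full scene synthesis. Every name below (SceneGraph, generate_layout, generate_scene, and so on) and the stubbed-out diffusion loops are illustrative assumptions for exposition, not the authors' actual implementation or API.

from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class SceneGraphNode:
    """One foreground object: a semantic class plus a 3D bounding box."""
    category: str                  # e.g. "car", "pedestrian"
    box: Tuple[float, ...]         # (x, y, z, length, width, height, yaw)

@dataclass
class SceneGraph:
    """Semantic scene graph: objects as nodes, spatial relations as edges."""
    nodes: List[SceneGraphNode] = field(default_factory=list)
    edges: List[Tuple[int, str, int]] = field(default_factory=list)  # (src, relation, dst)

def generate_layout(graph: SceneGraph, steps: int = 50) -> np.ndarray:
    """Stage 1 (stub): relation-aware diffusion over object boxes.

    A real model would iteratively denoise box parameters conditioned on
    node semantics and edge relations; the loop below is a placeholder.
    """
    boxes = np.array([n.box for n in graph.nodes], dtype=np.float32)
    for _ in range(steps):
        boxes = boxes + 0.0  # placeholder for one reverse-diffusion step
    return boxes

def generate_scene(layout: np.ndarray, num_points: int = 120_000) -> np.ndarray:
    """Stage 2 (stub): foreground-aware control injection.

    A real model would condition a LiDAR diffusion model on the layout;
    random points stand in for the synthesized scan here.
    """
    return np.random.randn(num_points, 4).astype(np.float32)  # x, y, z, intensity

# Usage: place two objects with one spatial relation, then synthesize a scan.
graph = SceneGraph(
    nodes=[SceneGraphNode("car", (5.0, 0.0, 0.0, 4.5, 1.8, 1.5, 0.0)),
           SceneGraphNode("pedestrian", (8.0, 2.0, 0.0, 0.6, 0.6, 1.7, 0.0))],
    edges=[(1, "right_of", 0)],
)
points = generate_scene(generate_layout(graph))

In this sketch the scene graph's edges carry the spatial relations ("right_of", "in_front_of") that relation-aware conditioning would consume, while the two functions mirror the paper's layout-then-scene ordering.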
Similar Papers
LaGen: Towards Autoregressive LiDAR Scene Generation
Computer Vision and Pattern Recognition
Generates 3D driving scenes one step at a time.
Learning to Generate 4D LiDAR Sequences
Computer Vision and Pattern Recognition
Creates 3D car sensor data from words.
DriveLiDAR4D: Sequential and Controllable LiDAR Scene Generation for Autonomous Driving
Computer Vision and Pattern Recognition
Creates realistic driving scenes for self-driving cars.