LiDARCrafter: Dynamic 4D World Modeling from LiDAR Sequences
By: Ao Liang, Youquan Liu, Yu Yang, and more
Potential Business Impact:
Generates and edits the 3D LiDAR scenes self-driving cars "see", enabling controllable training data and simulation.
Generative world models have become essential data engines for autonomous driving, yet most existing efforts focus on videos or occupancy grids, overlooking the unique properties of LiDAR data. Extending LiDAR generation to dynamic 4D world modeling presents challenges in controllability, temporal coherence, and evaluation standardization. To address these challenges, we present LiDARCrafter, a unified framework for 4D LiDAR generation and editing. Given free-form natural language inputs, we parse instructions into ego-centric scene graphs, which condition a tri-branch diffusion network to generate object structures, motion trajectories, and geometry. These structured conditions enable diverse and fine-grained scene editing. Additionally, an autoregressive module generates temporally coherent 4D LiDAR sequences with smooth transitions. To support standardized evaluation, we establish a comprehensive benchmark with diverse metrics spanning scene-, object-, and sequence-level aspects. Experiments on the nuScenes dataset using this benchmark demonstrate that LiDARCrafter achieves state-of-the-art performance in fidelity, controllability, and temporal consistency across all levels, paving the way for data augmentation and simulation. The code and benchmark are released to the community.
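To make the described pipeline concrete, the minimal Python sketch below shows one plausible encoding of an ego-centric scene graph as a data structure, assuming a simple node/edge representation. All names here (SceneNode, SceneEdge, EgoSceneGraph, parse_instruction) are hypothetical illustrations, not the authors' released code or API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of an ego-centric scene graph of the kind that
# could condition a tri-branch diffusion model. All names are
# illustrative, not LiDARCrafter's actual API.

@dataclass
class SceneNode:
    """One object instance parsed from the language instruction."""
    category: str                       # e.g. "car", "pedestrian"
    box: Tuple[float, ...]              # (x, y, z, l, w, h, yaw) in the ego frame
    waypoints: List[Tuple[float, float]] = field(default_factory=list)  # 2D motion trajectory

@dataclass
class SceneEdge:
    """Spatial relation between two nodes, referenced by index."""
    subject: int
    relation: str                       # e.g. "in_front_of", "left_of"
    obj: int

@dataclass
class EgoSceneGraph:
    """Structured condition fed to the generative model."""
    nodes: List[SceneNode]
    edges: List[SceneEdge]

def parse_instruction(text: str) -> EgoSceneGraph:
    """Stand-in for the language parser: free-form text -> scene graph.
    A real system would query an LLM; this hard-codes one example."""
    ego = SceneNode(category="ego_vehicle", box=(0.0, 0.0, 0.0, 4.5, 1.9, 1.6, 0.0))
    car = SceneNode(
        category="car",
        box=(8.0, -1.5, 0.0, 4.5, 1.9, 1.6, 0.0),
        waypoints=[(8.0, -1.5), (12.0, -1.5), (16.0, -1.5)],
    )
    return EgoSceneGraph(
        nodes=[ego, car],
        edges=[SceneEdge(subject=1, relation="in_front_of", obj=0)],
    )

graph = parse_instruction("a car drives ahead of the ego vehicle")
print(graph.nodes[1].category, graph.edges[0].relation)  # car in_front_of
```

In the full framework, each component of such a structure would map naturally onto one branch of the tri-branch diffusion network (object structures from the boxes, motion trajectories from the waypoints, and per-object geometry), which is what makes fine-grained editing possible: changing a node or edge re-specifies only the affected part of the scene.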
Similar Papers
Learning to Generate 4D LiDAR Sequences
CV and Pattern Recognition
Generates 4D LiDAR sensor sequences from text descriptions.
DriveLiDAR4D: Sequential and Controllable LiDAR Scene Generation for Autonomous Driving
CV and Pattern Recognition
Creates realistic driving scenes for self-driving cars.