Score: 1

DriveGen3D: Boosting Feed-Forward Driving Scene Generation with Efficient Video Diffusion

Published: October 17, 2025 | arXiv ID: 2510.15264v1

By: Weijie Wang, Jiagang Zhu, Zeyu Zhang, and more

Potential Business Impact:

Generates realistic, controllable 3D driving videos and the corresponding dynamic 3D scene reconstructions in real time.

Business Areas:
Image Recognition Data and Analytics, Software

We present DriveGen3D, a novel framework for generating high-quality, highly controllable dynamic 3D driving scenes that addresses critical limitations of existing methodologies. Current approaches to driving scene synthesis suffer from prohibitive computational demands for extended temporal generation, focus exclusively on prolonged video synthesis without 3D representation, or restrict themselves to static single-scene reconstruction. Our work bridges this methodological gap by integrating accelerated long-term video generation with large-scale dynamic scene reconstruction through multimodal conditional control. DriveGen3D introduces a unified pipeline consisting of two specialized components: FastDrive-DiT, an efficient video diffusion transformer for high-resolution, temporally coherent video synthesis under text and Bird's-Eye-View (BEV) layout guidance; and FastRecon3D, a feed-forward reconstruction module that rapidly builds 3D Gaussian representations across time, ensuring spatial-temporal consistency. Together, these components enable real-time generation of extended driving videos (up to $424\times800$ resolution at 12 FPS) and corresponding dynamic 3D scenes, achieving an SSIM of 0.811 and a PSNR of 22.84 on novel view synthesis while maintaining parameter efficiency.
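To make the two-stage design concrete, below is a minimal, purely illustrative PyTorch sketch of the pipeline shape the abstract describes: a conditional video denoiser standing in for FastDrive-DiT, followed by a feed-forward regressor of per-frame 3D Gaussian parameters standing in for FastRecon3D. All class names, dimensions, the 14-parameter Gaussian layout, and the toy denoising loop are assumptions for illustration only; the paper does not publish this API.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins throughout -- not the authors' released code.

class VideoDenoiser(nn.Module):
    """Stage 1 analogue (FastDrive-DiT): predicts a noise residual for
    frame latents, conditioned on fused text + BEV-layout embeddings."""
    def __init__(self, latent_dim=64, cond_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 128),
            nn.SiLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=-1))


class GaussianRegressor(nn.Module):
    """Stage 2 analogue (FastRecon3D): feed-forward regression of 3D
    Gaussian parameters per frame, with no per-scene optimization."""
    def __init__(self, latent_dim=64, n_gaussians=256):
        super().__init__()
        # Assumed layout per Gaussian: 3 position + 3 scale +
        # 4 rotation (quaternion) + 3 color + 1 opacity = 14 values.
        self.head = nn.Linear(latent_dim, n_gaussians * 14)
        self.n = n_gaussians

    def forward(self, frame_latents):
        params = self.head(frame_latents)
        return params.view(*frame_latents.shape[:-1], self.n, 14)


# Toy end-to-end pass: batch of 1, T frames.
T, latent_dim, cond_dim = 8, 64, 32
denoiser = VideoDenoiser(latent_dim, cond_dim)
recon = GaussianRegressor(latent_dim)

z = torch.randn(1, T, latent_dim)    # noisy frame latents
cond = torch.randn(1, T, cond_dim)   # fused text + BEV condition
for _ in range(4):                   # a few toy denoising steps
    z = z - 0.1 * denoiser(z, cond)

gaussians = recon(z)                 # shape: (1, T, 256, 14)
print(gaussians.shape)
```

The key design point the sketch mirrors is that the second stage is feed-forward: Gaussian parameters come from a single network pass over the generated frames rather than iterative per-scene fitting, which is what makes real-time dynamic 3D reconstruction plausible.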

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition