FreeGen: Feed-Forward Reconstruction-Generation Co-Training for Free-Viewpoint Driving Scene Synthesis
By: Shijie Chen, Peixi Peng
Potential Business Impact:
Creates realistic driving videos from any angle.
Closed-loop simulation and scalable pre-training for autonomous driving require synthesizing free-viewpoint driving scenes. However, existing datasets and generative pipelines rarely provide consistent off-trajectory observations, limiting large-scale evaluation and training. While recent generative models demonstrate strong visual realism, they struggle to jointly achieve interpolation consistency and extrapolation realism without per-scene optimization. To address this, we propose FreeGen, a feed-forward reconstruction-generation co-training framework for free-viewpoint driving scene synthesis. The reconstruction model provides stable geometric representations to ensure interpolation consistency, while the generation model performs geometry-aware enhancement to improve realism at unseen viewpoints. Through co-training, generative priors are distilled into the reconstruction model to improve off-trajectory rendering, and the refined geometry in turn offers stronger structural guidance for generation. Experiments demonstrate that FreeGen achieves state-of-the-art performance on free-viewpoint driving scene synthesis.
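The paper does not include code, but the abstract's co-training loop can be illustrated with a minimal PyTorch sketch. Everything below is an assumption made for illustration: the module names (ReconNet, GenNet), the pose-embedding input, the simple L1 distillation objective, and the 0.1 loss weight are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of the reconstruction-generation co-training loop
# described in the abstract. All architectures and hyperparameters
# here are illustrative assumptions, not the published method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReconNet(nn.Module):
    """Stand-in for the feed-forward reconstruction model: maps input
    views plus a queried camera pose embedding to a rendered image."""
    def __init__(self, ch=3, pose_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch + pose_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1))

    def forward(self, views, pose_embed):
        return self.net(torch.cat([views, pose_embed], dim=1))

class GenNet(nn.Module):
    """Stand-in for the generation model: residually refines a render,
    which itself carries the geometric conditioning ("geometry-aware
    enhancement" in the abstract)."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1))

    def forward(self, render):
        return render + self.net(render)

recon, gen = ReconNet(), GenNet()
opt = torch.optim.Adam(
    list(recon.parameters()) + list(gen.parameters()), lr=1e-4)

def co_training_step(views, pose_on, gt_on, pose_off):
    """One hypothetical co-training step:
    1) reconstruction loss on the recorded (on-trajectory) view,
    2) generator trains to refine renders toward real images,
    3) distillation pulls the off-trajectory render toward the
       generator's refinement."""
    # Interpolation consistency: supervise the on-trajectory render.
    render_on = recon(views, pose_on)
    loss_recon = F.l1_loss(render_on, gt_on)

    # Generator learns enhancement where ground truth exists; detach
    # so this term updates only the generator.
    loss_gen = F.l1_loss(gen(render_on.detach()), gt_on)

    # Off-trajectory: distill the generative prior into reconstruction;
    # detach the refinement so gradients flow only into recon.
    render_off = recon(views, pose_off)
    refined_off = gen(render_off)
    loss_distill = F.l1_loss(render_off, refined_off.detach())

    loss = loss_recon + loss_gen + 0.1 * loss_distill  # weight assumed
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors of shape (B, C, H, W).
views = torch.randn(2, 3, 64, 64)
pose_on = torch.randn(2, 16, 64, 64)   # assumed pose embedding
pose_off = torch.randn(2, 16, 64, 64)
gt_on = torch.randn(2, 3, 64, 64)
print(co_training_step(views, pose_on, gt_on, pose_off))
```

The design point mirrored from the abstract is the direction of the gradients: the off-trajectory distillation target is detached, so the generative prior shapes the reconstruction branch rather than the reverse, while the generator itself is supervised only on on-trajectory views where ground truth exists.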
Similar Papers
DriveGen3D: Boosting Feed-Forward Driving Scene Generation with Efficient Video Diffusion
CV and Pattern Recognition
Makes realistic 3D driving videos and worlds.
DGGT: Feedforward 4D Reconstruction of Dynamic Driving Scenes using Unposed Images
CV and Pattern Recognition
Lets self-driving cars see and remember 3D scenes.
ReCamDriving: LiDAR-Free Camera-Controlled Novel Trajectory Video Generation
CV and Pattern Recognition
Creates realistic videos of cars driving anywhere.