Realistic and Controllable 3D Gaussian-Guided Object Editing for Driving Video Generation
By: Jiusi Li, Jackson Jiang, Jinyu Miao, and more
Potential Business Impact:
Makes self-driving cars practice tricky situations safely.
Corner cases are crucial for training and validating autonomous driving systems, yet collecting them from the real world is often costly and hazardous. Editing objects within captured sensor data offers an effective alternative for generating diverse scenarios, commonly achieved through 3D Gaussian Splatting or image generative models. However, these approaches often suffer from limited visual fidelity or imprecise pose control. To address these issues, we propose G^2Editor, a framework designed for photorealistic and precise object editing in driving videos. Our method leverages a 3D Gaussian representation of the edited object as a dense prior, injected into the denoising process to ensure accurate pose control and spatial consistency. A scene-level 3D bounding box layout is employed to reconstruct occluded areas of non-target objects. Furthermore, to guide the appearance details of the edited object, we incorporate hierarchical fine-grained features as additional conditions during generation. Experiments on the Waymo Open Dataset demonstrate that G^2Editor effectively supports object repositioning, insertion, and deletion within a unified framework, outperforming existing methods in both pose controllability and visual quality, while also benefiting downstream data-driven tasks.
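To make the conditioning idea in the abstract concrete, the sketch below shows one plausible way a rendered 3D Gaussian prior, a scene-level bounding-box layout map, and appearance features could be injected into a single denoising step. This is a minimal illustration, not the authors' implementation: the module names, tensor shapes, toy denoiser, and the FiLM-style feature modulation are all assumptions; G^2Editor's actual hierarchical injection mechanism may differ.

import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    """Toy denoiser: concatenates condition maps with the noisy latent
    along the channel axis and predicts the noise residual."""
    def __init__(self, latent_ch=4, prior_ch=3, layout_ch=1, feat_dim=64, hidden=128):
        super().__init__()
        in_ch = latent_ch + prior_ch + layout_ch
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.SiLU(),
        )
        # Appearance features modulate the backbone output (FiLM-style);
        # this is a common conditioning pattern chosen here for illustration,
        # not necessarily the paper's mechanism.
        self.film = nn.Linear(feat_dim, 2 * hidden)
        self.out = nn.Conv2d(hidden, latent_ch, 3, padding=1)

    def forward(self, noisy_latent, gaussian_prior, bbox_layout, appearance_feat):
        # Dense spatial conditions are concatenated with the noisy latent.
        x = torch.cat([noisy_latent, gaussian_prior, bbox_layout], dim=1)
        h = self.backbone(x)
        scale, shift = self.film(appearance_feat).chunk(2, dim=-1)
        h = h * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.out(h)  # predicted noise residual

# Usage with dummy tensors (batch of 2, 32x32 latent resolution; shapes are hypothetical).
model = ConditionalDenoiser()
noisy_latent   = torch.randn(2, 4, 32, 32)   # diffusion latent at timestep t
gaussian_prior = torch.randn(2, 3, 32, 32)   # rendering of the edited object's 3D Gaussians
bbox_layout    = torch.randn(2, 1, 32, 32)   # scene-level 3D bounding-box layout map
appearance     = torch.randn(2, 64)          # pooled fine-grained appearance features
eps_pred = model(noisy_latent, gaussian_prior, bbox_layout, appearance)
print(eps_pred.shape)  # torch.Size([2, 4, 32, 32])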
Similar Papers
DrivingGaussian++: Towards Realistic Reconstruction and Editable Simulation for Surrounding Dynamic Driving Scenes
CV and Pattern Recognition
Makes self-driving cars see and change driving scenes.
GaussEdit: Adaptive 3D Scene Editing with Text and Image Prompts
Graphics
Changes 3D scenes with words and pictures.
InstDrive: Instance-Aware 3D Gaussian Splatting for Driving Scenes
CV and Pattern Recognition
Lets cars understand and edit driving scenes.