DualDiff: Dual-branch Diffusion Model for Autonomous Driving with Semantic Fusion
By: Haoteng Li, Zhao Yang, Zezhong Qian, and more
Potential Business Impact:
Generates realistic 3D driving scenes that help self-driving cars see better.
Accurate and high-fidelity driving scene reconstruction relies on fully leveraging scene information as conditioning. However, existing approaches, which primarily use 3D bounding boxes and binary maps for foreground and background control, fall short in capturing the complexity of the scene and integrating multi-modal information. In this paper, we propose DualDiff, a dual-branch conditional diffusion model designed to enhance multi-view driving scene generation. We introduce Occupancy Ray Sampling (ORS), a semantic-rich 3D representation, alongside a numerical driving scene representation, for comprehensive foreground and background control. To improve cross-modal information integration, we propose a Semantic Fusion Attention (SFA) mechanism that aligns and fuses features across modalities. Furthermore, we design a foreground-aware masked (FGM) loss to enhance the generation of tiny objects. DualDiff achieves state-of-the-art FID scores, as well as consistently better results on downstream BEV segmentation and 3D object detection tasks.
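The abstract does not spell out how the foreground-aware masked (FGM) loss is computed. Below is a minimal sketch of one plausible formulation, assuming a standard noise-prediction diffusion objective reweighted by a binary foreground mask so that small objects contribute more than their pixel count would suggest; the function name, `fg_weight` parameter, and tensor shapes are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def foreground_aware_masked_loss(noise_pred: torch.Tensor,
                                 noise_target: torch.Tensor,
                                 fg_mask: torch.Tensor,
                                 fg_weight: float = 2.0) -> torch.Tensor:
    """Hypothetical foreground-weighted diffusion loss (illustrative sketch).

    noise_pred / noise_target: (B, C, H, W) predicted and ground-truth noise.
    fg_mask: (B, 1, H, W) binary mask marking foreground (e.g. vehicle) pixels.
    fg_weight: extra weight applied to foreground regions so tiny objects
        are not drowned out by the large background area.
    """
    # Per-pixel MSE between predicted and target noise, no reduction yet.
    per_pixel = F.mse_loss(noise_pred, noise_target, reduction="none")
    # Background pixels keep weight 1.0; foreground pixels get fg_weight.
    weights = 1.0 + (fg_weight - 1.0) * fg_mask
    return (weights * per_pixel).mean()
```

In this reading, the mask simply rescales the usual denoising loss, which is one common way to bias a diffusion model toward small or rare foreground regions; the paper's actual masking scheme may differ.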
Similar Papers
DualDiff+: Dual-Branch Diffusion for High-Fidelity Video Generation with Reward Guidance
CV and Pattern Recognition
Creates realistic driving scenes for self-driving cars.
DiffSemanticFusion: Semantic Raster BEV Fusion for Autonomous Driving via Online HD Map Diffusion
CV and Pattern Recognition
Helps self-driving cars see roads more clearly.
Underlying Semantic Diffusion for Effective and Efficient In-Context Learning
CV and Pattern Recognition
Makes AI draw better pictures, faster.