4D-RaDiff: Latent Diffusion for 4D Radar Point Cloud Generation
By: Jimmie Kwok, Holger Caesar, Andras Palffy
Potential Business Impact:
Creates realistic synthetic radar data so self-driving cars can learn to see in fog with far fewer labeled examples.
Automotive radar has shown promise for environment perception thanks to its cost-effectiveness and robustness in adverse weather conditions. However, the limited availability of annotated radar data poses a significant challenge for advancing radar-based perception systems. To address this limitation, we propose a novel framework to generate 4D radar point clouds for training and evaluating object detectors. Unlike image-based diffusion models, our method is designed to account for the sparsity and unique characteristics of radar point clouds by applying diffusion to a latent point cloud representation. Within this latent space, generation is controlled via conditioning at either the object or scene level. The proposed 4D-RaDiff converts unlabeled bounding boxes into high-quality radar annotations and transforms existing LiDAR point clouds into realistic radar scenes. Experiments demonstrate that using the synthetic radar data of 4D-RaDiff as a data augmentation method during training consistently improves object detection performance compared to training on real data only. In addition, pre-training on our synthetic data reduces the amount of annotated radar data required by up to 90% while achieving comparable object detection performance.
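To make the latent-diffusion idea concrete, below is a minimal sketch of one conditioned denoising training step over a latent point-cloud representation. This is not the authors' implementation: the network, tensor shapes, noise schedule, and conditioning format (per-point bounding-box embeddings) are all illustrative assumptions.

```python
# Minimal sketch (not the 4D-RaDiff code): a DDPM-style training step on a
# latent point-cloud representation, conditioned on object-level features.
# All module names, shapes, and hyperparameters here are assumptions.
import torch
import torch.nn as nn

class LatentDenoiser(nn.Module):
    """Toy denoiser: predicts the noise added to latent points, given a
    normalized timestep and a per-point condition (e.g. encoded boxes)."""
    def __init__(self, latent_dim=64, cond_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, 256),
            nn.SiLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z_t, t_norm, cond):
        # z_t:    (B, N, latent_dim) noisy latents for N latent points
        # t_norm: (B,) diffusion timestep scaled to [0, 1]
        # cond:   (B, N, cond_dim) conditioning, e.g. bounding-box embeddings
        t_feat = t_norm.view(-1, 1, 1).expand(-1, z_t.size(1), 1)
        return self.net(torch.cat([z_t, cond, t_feat], dim=-1))

# Forward diffusion schedule: q(z_t | z_0) = N(sqrt(a_bar) z_0, (1 - a_bar) I)
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, z0, cond):
    """One denoising step: noise clean latents z0, predict the noise back."""
    B = z0.size(0)
    t = torch.randint(0, T, (B,))
    noise = torch.randn_like(z0)
    a = alpha_bar[t].view(B, 1, 1)
    z_t = a.sqrt() * z0 + (1 - a).sqrt() * noise
    pred = model(z_t, t.float() / T, cond)
    return nn.functional.mse_loss(pred, noise)

# Example usage with random stand-ins for encoded radar latents and box features.
model = LatentDenoiser()
z0 = torch.randn(4, 128, 64)    # 4 scenes, 128 latent points each (assumed)
cond = torch.randn(4, 128, 32)  # per-point bounding-box embeddings (assumed)
loss = training_step(model, z0, cond)
loss.backward()
```

Diffusing in a latent space rather than over raw points is what lets standard diffusion machinery cope with radar's sparsity; at sampling time the loop would run in reverse from pure noise, with the object- or scene-level condition steering generation toward the given boxes or LiDAR-derived scene.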
Similar Papers
RaLD: Generating High-Resolution 3D Radar Point Clouds with Latent Diffusion
CV and Pattern Recognition
Makes self-driving cars see better in fog.
Reproducing and Extending RaDelft 4D Radar with Camera-Assisted Labels
CV and Pattern Recognition
Makes self-driving cars see better in fog.
Sem-RaDiff: Diffusion-Based 3D Radar Semantic Perception in Cluttered Agricultural Environments
Robotics
Helps robots see through dirt and rain.