4D-RaDiff: Latent Diffusion for 4D Radar Point Cloud Generation

Published: December 16, 2025 | arXiv ID: 2512.14235v1

By: Jimmie Kwok, Holger Caesar, Andras Palffy

Potential Business Impact:

Generates realistic synthetic radar data so self-driving cars can be trained to perceive objects reliably in fog and other adverse weather.

Business Areas:
Image Recognition Data and Analytics, Software

Automotive radar has shown promising developments in environment perception due to its cost-effectiveness and robustness in adverse weather conditions. However, the limited availability of annotated radar data poses a significant challenge for advancing radar-based perception systems. To address this limitation, we propose a novel framework to generate 4D radar point clouds for training and evaluating object detectors. Unlike image-based diffusion, our method is designed to account for the sparsity and unique characteristics of radar point clouds by applying diffusion to a latent point cloud representation. Within this latent space, generation is controlled via conditioning at either the object or scene level. The proposed 4D-RaDiff converts unlabeled bounding boxes into high-quality radar annotations and transforms existing LiDAR point cloud data into realistic radar scenes. Experiments demonstrate that incorporating synthetic radar data from 4D-RaDiff as a data augmentation method during training consistently improves object detection performance compared to training on real data only. In addition, pre-training on our synthetic data reduces the amount of required annotated radar data by up to 90% while achieving comparable object detection performance.
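The core idea in the abstract, running diffusion over a latent point cloud representation and steering generation with object- or scene-level conditioning, can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration: the encoder, the noise predictor, the conditioning vector, and all shapes are hypothetical stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a sparse 4D radar cloud (x, y, z, Doppler) and a
# small latent dimension; the paper's real dimensions are not given here.
N_POINTS, LATENT_DIM = 256, 32

def encode(points):
    """Toy linear encoder mapping each radar point into latent space.
    Stands in for a learned point-cloud autoencoder."""
    W = rng.standard_normal((points.shape[1], LATENT_DIM)) * 0.1
    return points @ W

def denoise_step(z_t, cond, alpha=0.98):
    """One toy reverse-diffusion step in latent space, nudged by a
    conditioning vector (e.g. an embedded bounding box or scene layout).
    A learned noise-prediction network would replace the placeholder."""
    predicted_noise = z_t - cond  # placeholder for a trained denoiser
    return (z_t - (1 - alpha) * predicted_noise) / np.sqrt(alpha)

# Encode a synthetic sparse point cloud into the latent space.
points = rng.standard_normal((N_POINTS, 4))
z = encode(points)

# Forward process adds Gaussian noise; the reverse loop iteratively
# denoises under a (hypothetical) object-level condition.
cond = rng.standard_normal((N_POINTS, LATENT_DIM)) * 0.01
z_t = z + rng.standard_normal(z.shape)
for _ in range(10):
    z_t = denoise_step(z_t, cond)

print(z_t.shape)  # latent to be decoded back into a radar point cloud
```

In the actual method, the denoised latent would be decoded back into a 4D radar point cloud; operating in latent space is what lets the approach cope with the sparsity of radar returns that defeats image-style pixel diffusion.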

Page Count
19 pages

Category
Computer Science:
CV and Pattern Recognition