R2LDM: An Efficient 4D Radar Super-Resolution Framework Leveraging Diffusion Model
By: Boyuan Zheng, Shouyi Lu, Renbo Huang, and more
Potential Business Impact:
Makes car radar sensors see in much more detail.
We introduce R2LDM, an innovative approach for generating dense and accurate 4D radar point clouds, guided by corresponding LiDAR point clouds. Instead of utilizing range images or bird's eye view (BEV) images, we represent both LiDAR and 4D radar point clouds using voxel features, which more effectively capture 3D shape information. Subsequently, we propose the Latent Voxel Diffusion Model (LVDM), which performs the diffusion process in the latent space. Additionally, a novel Latent Point Cloud Reconstruction (LPCR) module is utilized to reconstruct point clouds from high-dimensional latent voxel features. As a result, R2LDM effectively generates LiDAR-like point clouds from paired raw radar data. We evaluate our approach on two different datasets, and the experimental results demonstrate that our model achieves 6- to 10-fold densification of radar point clouds, outperforming state-of-the-art baselines in 4D radar point cloud super-resolution. Furthermore, the enhanced radar point clouds generated by our method significantly improve downstream tasks, achieving up to 31.7% improvement in point cloud registration recall rate and 24.9% improvement in object detection accuracy.
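The abstract describes a three-stage flow: voxelize the sparse radar point cloud, run a diffusion process over latent voxel features (LVDM), and reconstruct a dense, LiDAR-like point cloud from those latents (LPCR). Below is a minimal, self-contained PyTorch sketch of that flow under assumed shapes and hyperparameters; `voxelize`, `VoxelEncoder`, `LatentDenoiser`, and `PointDecoder` are hypothetical stand-ins for illustration, not the authors' LVDM/LPCR implementation, and the denoising schedule is purely illustrative.

```python
# Sketch of a radar-to-dense-points pipeline as outlined in the abstract.
# All module names, shapes, and the diffusion schedule are assumptions.
import torch
import torch.nn as nn


def voxelize(points: torch.Tensor, grid: int = 32, extent: float = 50.0) -> torch.Tensor:
    """Scatter an (N, 3) point cloud into a dense occupancy grid (1, 1, grid, grid, grid)."""
    idx = ((points / extent + 0.5) * grid).long().clamp(0, grid - 1)
    vox = torch.zeros(1, 1, grid, grid, grid)
    vox[0, 0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox


class VoxelEncoder(nn.Module):
    """3D-conv encoder mapping an occupancy grid to latent voxel features."""
    def __init__(self, latent_ch: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, latent_ch, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class LatentDenoiser(nn.Module):
    """Toy noise predictor for the latent diffusion loop (stand-in for LVDM)."""
    def __init__(self, latent_ch: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(latent_ch * 2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent, radar_latent):
        # Condition on the radar latent via channel concatenation (an assumption).
        return self.net(torch.cat([noisy_latent, radar_latent], dim=1))


class PointDecoder(nn.Module):
    """Maps latent voxel features to a fixed-size point set (stand-in for LPCR)."""
    def __init__(self, latent_ch: int = 8, grid: int = 8, n_points: int = 2048):
        super().__init__()
        self.fc = nn.Linear(latent_ch * grid ** 3, n_points * 3)
        self.n_points = n_points

    def forward(self, z):
        return self.fc(z.flatten(1)).view(-1, self.n_points, 3)


if __name__ == "__main__":
    radar_points = torch.randn(200, 3) * 10.0      # sparse radar point cloud (toy data)
    radar_latent = VoxelEncoder()(voxelize(radar_points))

    # A few reverse-diffusion steps in latent space (illustrative schedule only).
    denoiser = LatentDenoiser()
    z = torch.randn_like(radar_latent)
    for t in range(4):
        z = z - 0.25 * denoiser(z, radar_latent)

    dense_points = PointDecoder()(z)
    print(dense_points.shape)                      # torch.Size([1, 2048, 3])
```

In a trained system, the densified output would then feed downstream tasks such as point cloud registration and object detection, which is where the paper reports its gains.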
Similar Papers
RaLD: Generating High-Resolution 3D Radar Point Clouds with Latent Diffusion
CV and Pattern Recognition
Makes self-driving cars see better in fog.
4D-RaDiff: Latent Diffusion for 4D Radar Point Cloud Generation
CV and Pattern Recognition
Makes self-driving cars see better in fog.
LiDAR Point Cloud Image-based Generation Using Denoising Diffusion Probabilistic Models
CV and Pattern Recognition
Makes self-driving cars see better in bad weather.