R$^3$D: Regional-guided Residual Radar Diffusion
By: Hao Li, Xinqi Liu, Yaoqing Jin
Millimeter-wave radar enables robust environment perception for autonomous systems under adverse conditions, yet it produces sparse, noisy point clouds with low angular resolution. Existing diffusion-based radar enhancement methods either incur high learning complexity by modeling full LiDAR distributions or fail to prioritize critical structures because they process all regions uniformly. To address these issues, we propose R$^3$D, a regional-guided residual radar diffusion framework that integrates two components: residual diffusion modeling, which focuses on the concentrated LiDAR-radar residual encoding complementary high-frequency details and thereby reduces learning difficulty, and sigma-adaptive regional guidance, which leverages radar-specific signal properties to generate attention maps and applies lightweight guidance only in low-noise stages, refining key regions while avoiding gradient imbalance. Extensive experiments on the ColoRadar dataset demonstrate that R$^3$D outperforms state-of-the-art methods, providing a practical solution for radar perception enhancement. Our anonymized code and pretrained models are released here: https://anonymous.4open.science/r/r3d-F836
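To make the two ideas in the abstract concrete, here is a minimal PyTorch-style sketch of one training step: the diffusion target is the LiDAR-radar residual rather than the full LiDAR distribution, and regional guidance is applied only at low noise levels. All names (`denoiser`, `radar_bev`, `lidar_bev`, `sigma_guide_max`, the intensity-based attention map) are illustrative assumptions, not the authors' actual API.

```python
import torch

def training_step(denoiser, radar_bev, lidar_bev, sigma, sigma_guide_max=0.5):
    """Hypothetical training step sketching R$^3$D's two ideas:
    (1) diffuse the LiDAR-radar residual instead of the full LiDAR
    distribution, and (2) sigma-adaptive regional guidance applied
    only when the noise level is low."""
    # (1) Residual target: the concentrated difference between LiDAR
    # and radar, which encodes complementary high-frequency detail.
    residual = lidar_bev - radar_bev

    # Standard diffusion-style corruption of the residual at noise level sigma.
    noise = torch.randn_like(residual)
    noisy_residual = residual + sigma * noise

    # Predict the clean residual, conditioned on the radar input.
    pred_residual = denoiser(noisy_residual, radar_bev, sigma)

    # Per-pixel loss before any regional weighting.
    loss_map = (pred_residual - residual) ** 2

    # (2) Sigma-adaptive regional guidance: derive an attention map from
    # radar signal properties (here a stand-in using normalized radar
    # intensity) and apply it only in low-noise stages, so high-noise
    # stages keep uniform gradients and avoid gradient imbalance.
    if sigma < sigma_guide_max:
        attention = radar_bev.abs() / (radar_bev.abs().amax() + 1e-8)
        loss_map = loss_map * (1.0 + attention)  # lightweight reweighting

    return loss_map.mean()
```

Under this sketch, inference would recover an enhanced scene as `radar_bev` plus the denoised residual, so the network never has to model the full LiDAR distribution.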
Similar Papers
RaLD: Generating High-Resolution 3D Radar Point Clouds with Latent Diffusion
Computer Vision and Pattern Recognition
Makes self-driving cars see better in fog.
Diffusion-Based mmWave Radar Point Cloud Enhancement Driven by Range Images
Robotics
Makes car sensors see better in bad weather.
4D-RaDiff: Latent Diffusion for 4D Radar Point Cloud Generation
Computer Vision and Pattern Recognition
Makes self-driving cars see better in fog.