NeuRadar: Neural Radiance Fields for Automotive Radar Point Clouds
By: Mahan Rafidashti, Ji Lan, Maryam Fatemi, and more
Potential Business Impact:
Simulates radar data so self-driving cars can be tested in fog and other bad weather.
Radar is an important sensor for autonomous driving (AD) systems due to its robustness to adverse weather and varying lighting conditions. Novel view synthesis using neural radiance fields (NeRFs) has recently received considerable attention in AD because of its potential to enable efficient testing and validation, but it remains unexplored for radar point clouds. In this paper, we present NeuRadar, a NeRF-based model that jointly generates radar point clouds, camera images, and lidar point clouds. We explore set-based object detection methods such as DETR, and propose an encoder-based solution grounded in the NeRF geometry for improved generalizability. We propose both a deterministic and a probabilistic point cloud representation, with the latter able to capture the stochastic behavior of radar. We achieve realistic reconstruction results on two automotive datasets, establishing a baseline for NeRF-based radar point cloud simulation models. In addition, we release radar data for ZOD's Sequences and Drives to enable further research in this field. To encourage further development of radar NeRFs, we release the source code for NeuRadar.
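To make the difference between the two point cloud representations concrete, the sketch below shows a minimal PyTorch head that turns per-query scene features (e.g. pooled from a NeRF field) into radar points. This is an illustration only, not the authors' released code: the feature dimension, module names, and the use of an independent per-axis Gaussian are assumptions.

```python
# Hypothetical sketch of a radar point-cloud head: deterministic vs. probabilistic.
# Not NeuRadar's actual architecture; dimensions and parameterization are assumed.
import torch
import torch.nn as nn


class RadarPointHead(nn.Module):
    """Maps per-query scene features to radar detections."""

    def __init__(self, feat_dim: int = 128, probabilistic: bool = True):
        super().__init__()
        self.probabilistic = probabilistic
        self.existence = nn.Linear(feat_dim, 1)   # probability that a radar return exists
        self.mean = nn.Linear(feat_dim, 3)        # xyz position of the detection
        if probabilistic:
            self.log_std = nn.Linear(feat_dim, 3) # per-axis spread of the detection

    def forward(self, feats: torch.Tensor):
        p_exist = torch.sigmoid(self.existence(feats)).squeeze(-1)
        mu = self.mean(feats)
        if not self.probabilistic:
            return p_exist, mu                    # deterministic: one fixed point per query
        std = torch.exp(self.log_std(feats))
        points = mu + std * torch.randn_like(std) # sample to mimic radar's stochastic returns
        return p_exist, points


if __name__ == "__main__":
    feats = torch.randn(64, 128)                  # 64 queries with 128-d features
    head = RadarPointHead(probabilistic=True)
    prob, pts = head(feats)
    print(prob.shape, pts.shape)                  # torch.Size([64]) torch.Size([64, 3])
```

Under this reading, the deterministic variant commits to a single point per query, while the probabilistic variant predicts a distribution and samples from it, which is one simple way to model the noisy, sparse nature of automotive radar returns.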
Similar Papers
Pose Optimization for Autonomous Driving Datasets using Neural Rendering Models
CV and Pattern Recognition
Makes self-driving cars safer by fixing map errors.
VDNeRF: Vision-only Dynamic Neural Radiance Field for Urban Scenes
CV and Pattern Recognition
Makes robots see moving things and know where they are.
RA-NeRF: Robust Neural Radiance Field Reconstruction with Accurate Camera Pose Estimation under Complex Trajectories
CV and Pattern Recognition
Makes 3D cameras see better in tricky spots.