CoVeRaP: Cooperative Vehicular Perception through mmWave FMCW Radars
By: Jinyue Song, Hansol Ku, Jayneel Vora, and more
Potential Business Impact:
Cars see better together, even in bad weather.
Automotive FMCW radars remain reliable in rain and glare, yet their sparse, noisy point clouds constrain 3-D object detection. We therefore release CoVeRaP, a 21k-frame cooperative dataset that time-aligns radar, camera, and GPS streams from multiple vehicles across diverse manoeuvres. Built on this data, we propose a unified cooperative-perception framework with middle- and late-fusion options. Its baseline network employs a multi-branch PointNet-style encoder enhanced with self-attention to fuse spatial, Doppler, and intensity cues into a common latent space, which a decoder converts into 3-D bounding boxes and per-point depth confidence. Experiments show that middle fusion with intensity encoding boosts mean Average Precision by up to 9x at IoU 0.9 and consistently outperforms single-vehicle baselines. CoVeRaP thus establishes the first reproducible benchmark for multi-vehicle FMCW-radar perception and demonstrates that affordable radar sharing markedly improves detection robustness. Dataset and code are publicly available to encourage further research.
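To make the baseline architecture concrete, below is a minimal PyTorch sketch of the described design: per-cue PointNet-style branches, self-attention fusion into a shared latent space, and a decoder producing a 3-D box plus per-point confidence. All layer widths, the 7-DoF box parameterization, the branch-summation fusion, and the class names here are illustrative assumptions, not the released CoVeRaP code.

```python
# Hedged sketch of a multi-branch PointNet-style encoder with
# self-attention fusion, as described in the abstract. Sizes and
# heads are assumptions for illustration only.
import torch
import torch.nn as nn


class PointBranch(nn.Module):
    """Shared-MLP (PointNet-style) encoder for one cue type."""

    def __init__(self, in_dim: int, feat_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, in_dim) -> per-point features (B, N, feat_dim)
        return self.mlp(x)


class RadarFusionBaseline(nn.Module):
    """Three-branch encoder + self-attention fusion + box/confidence decoder.

    Hypothetical name; the paper does not specify the module layout.
    """

    def __init__(self, feat_dim: int = 128, n_heads: int = 4):
        super().__init__()
        self.spatial = PointBranch(3, feat_dim)    # (x, y, z)
        self.doppler = PointBranch(1, feat_dim)    # radial velocity
        self.intensity = PointBranch(1, feat_dim)  # return intensity
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        # Decoder heads: a 7-DoF box (x, y, z, l, w, h, yaw) from the pooled
        # latent, and a per-point depth-confidence score.
        self.box_head = nn.Linear(feat_dim, 7)
        self.conf_head = nn.Linear(feat_dim, 1)

    def forward(self, points: torch.Tensor):
        # points: (B, N, 5) = xyz + doppler + intensity per radar point.
        # Summing the branch outputs is one simple way to place all three
        # cues in a common latent space (an assumption, not the paper's).
        f = (self.spatial(points[..., :3])
             + self.doppler(points[..., 3:4])
             + self.intensity(points[..., 4:5]))
        fused, _ = self.attn(f, f, f)                # (B, N, feat_dim)
        pooled = fused.max(dim=1).values             # global latent (B, feat_dim)
        boxes = self.box_head(pooled)                # (B, 7)
        conf = torch.sigmoid(self.conf_head(fused))  # (B, N, 1)
        return boxes, conf


if __name__ == "__main__":
    model = RadarFusionBaseline()
    cloud = torch.randn(2, 256, 5)  # two frames, 256 radar points each
    boxes, conf = model(cloud)
    print(boxes.shape, conf.shape)  # torch.Size([2, 7]) torch.Size([2, 256, 1])
```

In a middle-fusion setting, per-point features from multiple vehicles would be concatenated along the point dimension before the attention step; in late fusion, each vehicle would run the full network and only the resulting boxes would be merged.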
Similar Papers
Improving Multi-Vehicle Perception Fusion with Millimeter-Wave Radar Assistance
Robotics
Helps self-driving cars see better together.
MCOP: Multi-UAV Collaborative Occupancy Prediction
CV and Pattern Recognition
Drones see better together, even hidden things.