SparseAlign: A Fully Sparse Framework for Cooperative Object Detection
By: Yunshuang Yuan, Yan Xia, Daniel Cremers, and more
Potential Business Impact:
Helps self-driving cars see farther and more safely.
Cooperative perception can enlarge the field of view and reduce occlusions for the ego vehicle, thereby improving the perception performance and safety of autonomous driving. Despite the success of previous works on cooperative object detection, they mostly operate on dense Bird's Eye View (BEV) feature maps, which are computationally demanding and scale poorly to long-range detection. More efficient, fully sparse frameworks remain largely unexplored. In this work, we design a fully sparse framework, SparseAlign, with three key components: an enhanced sparse 3D backbone, a query-based temporal context learning module, and a robust detection head specially tailored for sparse features. Extensive experiments on both the OPV2V and DairV2X datasets show that our framework, despite its sparsity, outperforms the state of the art while requiring less communication bandwidth. In addition, experiments on the OPV2Vt and DairV2Xt datasets for time-aligned cooperative object detection show a significant performance gain over baseline works.
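To make the three-stage design concrete, below is a minimal, hypothetical PyTorch sketch of a fully sparse pipeline in the spirit of the abstract: a backbone that encodes only non-empty voxels, a query-based attention module that fuses temporal context, and a head that predicts boxes directly from sparse features. All module names, dimensions, and the attention-based fusion are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SparseBackbone(nn.Module):
    """Stand-in for the enhanced sparse 3D backbone: encodes only non-empty voxels."""
    def __init__(self, in_dim=4, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))

    def forward(self, points):  # points: (N, in_dim), one row per occupied voxel
        return self.mlp(points)  # (N, feat_dim) sparse features, no dense BEV map

class TemporalQueryFusion(nn.Module):
    """Query-based temporal context: current features attend to past-frame features."""
    def __init__(self, feat_dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)

    def forward(self, cur, past):  # cur: (1, N, C), past: (1, M, C)
        fused, _ = self.attn(query=cur, key=past, value=past)
        return cur + fused  # residual fusion of temporal context

class SparseDetectionHead(nn.Module):
    """Per-feature classification and box regression on sparse features."""
    def __init__(self, feat_dim=64, num_classes=1):
        super().__init__()
        self.cls = nn.Linear(feat_dim, num_classes)
        self.box = nn.Linear(feat_dim, 7)  # (x, y, z, l, w, h, yaw)

    def forward(self, feats):
        return self.cls(feats), self.box(feats)

# Toy forward pass over randomly generated "sparse" voxel features.
backbone, fusion, head = SparseBackbone(), TemporalQueryFusion(), SparseDetectionHead()
cur_pts, past_pts = torch.randn(128, 4), torch.randn(96, 4)
cur_feat = backbone(cur_pts).unsqueeze(0)    # (1, 128, 64)
past_feat = backbone(past_pts).unsqueeze(0)  # (1, 96, 64)
scores, boxes = head(fusion(cur_feat, past_feat).squeeze(0))
print(scores.shape, boxes.shape)  # torch.Size([128, 1]) torch.Size([128, 7])
```

Because every stage operates on (N, C) feature sets rather than dense BEV grids, compute and communication scale with the number of occupied voxels, which is the efficiency argument the abstract makes for long-range detection.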
Similar Papers
SparseCoop: Cooperative Perception with Kinematic-Grounded Queries
CV and Pattern Recognition
Cars share data to see around corners.
Sparse Multiview Open-Vocabulary 3D Detection
CV and Pattern Recognition
Lets computers see and find objects in 3D.
SlimComm: Doppler-Guided Sparse Queries for Bandwidth-Efficient Cooperative 3-D Perception
CV and Pattern Recognition
Cars share less data yet still see around corners.