Fast2comm: Collaborative Perception Combined with Prior Knowledge
By: Zhengbin Zhang, Yan Wu, Hongkun Zhang
Potential Business Impact:
Helps self-driving cars see better with less data.
Collaborative perception has the potential to significantly enhance perceptual accuracy through the sharing of complementary information among agents. However, real-world collaborative perception faces persistent challenges, particularly in balancing perception performance against bandwidth limitations and in coping with localization errors. To address these challenges, we propose Fast2comm, a prior knowledge-based collaborative perception framework. Specifically, (1) we propose a prior-supervised confidence feature generation method that effectively distinguishes foreground from background by producing highly discriminative confidence features; (2) we propose a GT bounding box-based spatial prior feature selection strategy that ensures only the most informative prior-knowledge features are selected and shared, thereby minimizing background noise, optimizing bandwidth efficiency, and improving robustness to localization inaccuracies; (3) we decouple the feature fusion strategies of the training and testing phases, enabling dynamic bandwidth adaptation. To comprehensively validate our framework, we conduct extensive experiments on both real-world and simulated datasets. The results demonstrate the superior performance of our model and highlight the necessity of the proposed methods. Our code is available at https://github.com/Zhangzhengbin-TJ/Fast2comm.
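To make the confidence-guided sharing described in (1) and (2) concrete, below is a minimal PyTorch-style sketch of how a foreground confidence map could gate which spatial features an agent transmits. The function name `select_prior_features`, the tensor shapes, and the `budget_ratio` knob are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import torch

def select_prior_features(features, confidence, budget_ratio=0.1):
    """Sketch of confidence-guided sparse feature selection (assumed API).

    features:     (C, H, W) BEV feature map from one agent
    confidence:   (H, W) foreground confidence map, e.g. supervised
                  against GT bounding boxes during training
    budget_ratio: fraction of spatial locations to transmit; tunable
                  at test time for dynamic bandwidth adaptation
    """
    C, H, W = features.shape
    k = max(1, int(budget_ratio * H * W))

    # Keep only the k most confident (likely-foreground) locations.
    topk_idx = torch.topk(confidence.flatten(), k).indices
    mask = torch.zeros(H * W, dtype=torch.bool)
    mask[topk_idx] = True
    mask = mask.view(H, W)

    # Sparse message: selected features plus their spatial indices,
    # so the receiver can scatter them back into its own BEV grid.
    selected = features[:, mask]            # (C, k)
    coords = mask.nonzero(as_tuple=False)   # (k, 2) row/col positions
    return selected, coords, mask

# Example: one agent's 64-channel BEV map on a 100x100 grid.
feat = torch.randn(64, 100, 100)
conf = torch.rand(100, 100)
msg, coords, mask = select_prior_features(feat, conf, budget_ratio=0.05)
# msg has shape (64, 500): only 5% of locations are transmitted.
```

At test time, lowering `budget_ratio` trades accuracy for bandwidth, mirroring the decoupled train/test fusion and dynamic bandwidth adaptation described in (3).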
Similar Papers
Which2comm: An Efficient Collaborative Perception Framework for 3D Object Detection
CV and Pattern Recognition
Cars share what they see to drive safer.
CoST: Efficient Collaborative Perception From Unified Spatiotemporal Perspective
CV and Pattern Recognition
Lets cars see around corners together.
Is Discretization Fusion All You Need for Collaborative Perception?
CV and Pattern Recognition
Helps self-driving cars see farther and better.