BEVCon: Advancing Bird's Eye View Perception with Contrastive Learning
By: Ziyang Leng, Jiawei Yang, Zhicheng Ren, and more
Potential Business Impact:
Helps self-driving cars see better from above.
We present BEVCon, a simple yet effective contrastive learning framework designed to improve Bird's Eye View (BEV) perception in autonomous driving. BEV perception offers a top-down view representation of the surrounding environment, making it crucial for 3D object detection, segmentation, and trajectory prediction tasks. While prior work has primarily focused on enhancing BEV encoders and task-specific heads, we address the underexplored potential of representation learning in BEV models. BEVCon introduces two contrastive learning modules: an instance feature contrast module that refines BEV features and a perspective view contrast module that enhances the image backbone. Dense contrastive learning, applied on top of the detection losses, yields improved feature representations across both the BEV encoder and the backbone. Extensive experiments on the nuScenes dataset demonstrate that BEVCon delivers consistent performance gains, with up to a +2.4% mAP improvement over state-of-the-art baselines. Our results highlight the critical role of representation learning in BEV perception and offer a complementary avenue to conventional task-specific optimizations.
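To make the idea of instance-level contrast on BEV features concrete, below is a minimal sketch of an InfoNCE-style contrastive loss over pooled BEV instance embeddings. It is not the authors' implementation; the function names (info_nce, pool_instance_features), the box-based average pooling, and the two-view setup are illustrative assumptions, intended only to show the general shape of a dense contrastive objective added alongside detection losses.

import torch
import torch.nn.functional as F


def info_nce(anchor: torch.Tensor, positive: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE loss: anchor[i] and positive[i] are a positive pair; all other
    rows of `positive` act as negatives."""
    anchor = F.normalize(anchor, dim=-1)        # (N, D)
    positive = F.normalize(positive, dim=-1)    # (N, D)
    logits = anchor @ positive.t() / temperature  # (N, N) similarity matrix
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)


def pool_instance_features(bev_feats: torch.Tensor, boxes_bev: torch.Tensor) -> torch.Tensor:
    """Average-pool BEV features inside each (x1, y1, x2, y2) box on the BEV grid.

    bev_feats: (C, H, W) BEV feature map.
    boxes_bev: (N, 4) integer box coordinates on the BEV grid.
    Returns:   (N, C) one pooled embedding per instance.
    """
    pooled = []
    for x1, y1, x2, y2 in boxes_bev.long().tolist():
        region = bev_feats[:, y1:y2 + 1, x1:x2 + 1]
        pooled.append(region.mean(dim=(1, 2)))
    return torch.stack(pooled, dim=0)


if __name__ == "__main__":
    # Toy example: contrast instance embeddings taken from two augmented views
    # of the same scene; in practice these would come from the BEV encoder.
    C, H, W, N = 64, 50, 50, 8
    view_a = torch.randn(C, H, W)
    view_b = torch.randn(C, H, W)
    boxes = torch.randint(0, 40, (N, 2))
    boxes = torch.cat([boxes, boxes + 8], dim=1)  # (N, 4) boxes of size 9x9

    feats_a = pool_instance_features(view_a, boxes)
    feats_b = pool_instance_features(view_b, boxes)
    loss = info_nce(feats_a, feats_b)
    print(f"contrastive loss: {loss.item():.4f}")

In a full pipeline, a loss of this form would simply be weighted and summed with the existing detection losses, which is what makes it a complementary, rather than competing, training signal.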
Similar Papers
Refine-and-Contrast: Adaptive Instance-Aware BEV Representations for Multi-UAV Collaborative Object Detection
CV and Pattern Recognition
Drones see better together, even with less power.
Bridging Perspectives: Foundation Model Guided BEV Maps for 3D Object Detection and Tracking
CV and Pattern Recognition
Helps self-driving cars see better in 3D.
An Initial Study of Bird's-Eye View Generation for Autonomous Vehicles using Cross-View Transformers
CV and Pattern Recognition
Helps self-driving cars see roads from above.