Self-Supervised Pre-training with Combined Datasets for 3D Perception in Autonomous Driving
By: Shumin Wang, Zhuoran Yang, Lidian Wang, and more
Potential Business Impact:
Teaches self-driving cars to see in 3D.
The significant achievements of pre-trained models leveraging large volumes of data in NLP and 2D vision inspire us to explore the potential of large-scale data pre-training for 3D perception in autonomous driving. Toward this goal, this paper proposes to utilize massive unlabeled data from heterogeneous datasets to pre-train 3D perception models. We introduce a self-supervised pre-training framework that learns effective 3D representations from scratch on unlabeled data, combined with a prompt-adapter-based domain adaptation strategy that reduces dataset bias. The approach significantly improves model performance on downstream tasks such as 3D object detection, BEV segmentation, 3D object tracking, and occupancy prediction, and shows a steady performance increase as the training data volume scales up, demonstrating the potential to continually benefit 3D perception models for autonomous driving. We will release the source code to inspire further investigations in the community.
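The abstract names two ingredients: self-supervised pre-training on unlabeled point clouds pooled from heterogeneous datasets, and a prompt adapter that conditions a shared backbone on each source dataset to counter dataset bias. Since the code is not yet released, the PyTorch sketch below shows one plausible form such a prompt adapter could take; the name PromptAdapter, the token counts, and the attention-based fusion are hypothetical illustration choices, not the authors' implementation.

import torch
import torch.nn as nn

class PromptAdapter(nn.Module):
    # Hypothetical per-dataset prompt adapter: each source dataset gets a
    # bank of learnable prompt tokens that condition shared backbone
    # features, letting one model train on heterogeneous datasets with
    # less cross-dataset bias.
    def __init__(self, num_datasets: int, num_prompts: int = 8, dim: int = 256):
        super().__init__()
        # (num_datasets, num_prompts, dim) learnable prompts, small init.
        self.prompts = nn.Parameter(0.02 * torch.randn(num_datasets, num_prompts, dim))
        # Cross-attention that lets backbone tokens read from the prompt bank.
        self.fuse = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor, dataset_id: int) -> torch.Tensor:
        # feats: (B, N, dim) tokens from the 3D backbone (e.g. voxel/BEV features).
        prompts = self.prompts[dataset_id].unsqueeze(0).expand(feats.size(0), -1, -1)
        adapted, _ = self.fuse(query=feats, key=prompts, value=prompts)
        # Residual connection yields dataset-conditioned features.
        return self.norm(feats + adapted)

# Usage sketch: condition a batch of backbone tokens on its source dataset.
adapter = PromptAdapter(num_datasets=3)
tokens = torch.randn(2, 1024, 256)   # 2 scenes, 1024 tokens each
out = adapter(tokens, dataset_id=1)  # select prompts for the second dataset

During combined pre-training, dataset_id would track which source each batch came from; at fine-tuning time the matching prompt bank could be kept or re-initialized for the target domain.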
Similar Papers
LargeAD: Large-Scale Cross-Sensor Data Pretraining for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see and understand 3D world.
Learning-based 3D Reconstruction in Autonomous Driving: A Comprehensive Survey
CV and Pattern Recognition
Helps self-driving cars see and understand the world.
Unlock the Power of Unlabeled Data in Language Driving Model
CV and Pattern Recognition
Teaches self-driving cars with less data.