MapFM: Foundation Model-Driven HD Mapping with Multi-Task Contextual Learning
By: Leonid Ivanov, Vasily Yuryev, Dmitry Yudin
Potential Business Impact:
Helps self-driving cars see and map roads.
In autonomous driving, high-definition (HD) maps and semantic maps in bird's-eye view (BEV) are essential for accurate localization, planning, and decision-making. This paper introduces an enhanced end-to-end model named MapFM for online vectorized HD map generation. We show that incorporating a powerful foundation model for encoding camera images significantly boosts feature representation quality. To further enrich the model's understanding of the environment and improve prediction quality, we integrate auxiliary prediction heads for semantic segmentation in the BEV representation. This multi-task learning approach provides richer contextual supervision, leading to a more comprehensive scene representation and ultimately resulting in higher accuracy and improved quality of the predicted vectorized HD maps. The source code is available at https://github.com/LIvanoff/MapFM.
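To make the multi-task setup concrete, below is a minimal, hypothetical PyTorch sketch of the architecture the abstract describes: a foundation-model-style image encoder, a query-based head that regresses vectorized map polylines, and an auxiliary BEV semantic segmentation head trained with a combined loss. All module names, dimensions, and the loss weighting are illustrative assumptions, not the authors' actual implementation (see the linked repository for that).

```python
# Hypothetical MapFM-style multi-task sketch. Names and hyperparameters are
# assumptions for illustration only; the real model is in the authors' repo.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MapFMSketch(nn.Module):
    def __init__(self, feat_dim=384, bev_h=100, bev_w=50,
                 num_queries=50, num_points=20, num_classes=3):
        super().__init__()
        # Stand-in for a pretrained foundation-model encoder (e.g. a ViT);
        # in practice its pretrained weights would be loaded here.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=14, stride=14),  # patchify
            nn.GELU(),
        )
        # Toy image-to-BEV projection; a real model would use a learned
        # view transformation (e.g. cross-attention over camera features).
        self.to_bev = nn.Sequential(
            nn.AdaptiveAvgPool2d((bev_h, bev_w)),
            nn.Conv2d(feat_dim, feat_dim, 1),
        )
        # Primary head: map elements as fixed-length point sequences.
        self.query_embed = nn.Embedding(num_queries, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, 8, batch_first=True)
        self.map_head = nn.Linear(feat_dim, num_points * 2)
        # Auxiliary head: per-cell semantic segmentation in BEV.
        self.seg_head = nn.Conv2d(feat_dim, num_classes, 1)
        self.num_points = num_points

    def forward(self, images):
        feats = self.encoder(images)               # (B, C, h, w)
        bev = self.to_bev(feats)                   # (B, C, H, W)
        b = bev.shape[0]
        tokens = bev.flatten(2).transpose(1, 2)    # (B, H*W, C)
        queries = self.query_embed.weight.unsqueeze(0).expand(b, -1, -1)
        q, _ = self.attn(queries, tokens, tokens)  # decode queries vs BEV
        polylines = self.map_head(q).view(b, -1, self.num_points, 2)
        seg_logits = self.seg_head(bev)            # (B, K, H, W)
        return polylines, seg_logits


def multitask_loss(polylines, seg_logits, gt_polylines, gt_seg, seg_weight=0.5):
    # L1 regression on polyline points plus auxiliary cross-entropy on the
    # BEV segmentation map; the 0.5 weight is an assumed value.
    map_loss = F.l1_loss(polylines, gt_polylines)
    seg_loss = F.cross_entropy(seg_logits, gt_seg)
    return map_loss + seg_weight * seg_loss
```

The point of the sketch is the supervision pattern: the segmentation head never produces the final map, but its loss forces the shared BEV features to encode dense semantic context, which is the "richer contextual supervision" the abstract credits for the accuracy gains.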
Similar Papers
Predicting the Road Ahead: A Knowledge Graph based Foundation Model for Scene Understanding in Autonomous Driving
Computation and Language
Helps self-driving cars predict what happens next.
Control Map Distribution using Map Query Bank for Online Map Generation
CV and Pattern Recognition
Makes self-driving cars build maps faster.
Inferring Driving Maps by Deep Learning-based Trail Map Extraction
CV and Pattern Recognition
Makes self-driving cars learn roads from any car.