2nd Place Solution for CVPR2024 E2E Challenge: End-to-End Autonomous Driving Using Vision Language Model
By: Zilong Guo, Yi Luo, Long Sha, and more
Potential Business Impact:
Cars drive themselves using just one camera.
End-to-end autonomous driving has drawn tremendous attention recently. Many works focus on using modular deep neural networks to construct the end-to-end architecture. However, whether powerful large language models (LLMs), and in particular multi-modality Vision Language Models (VLMs), can benefit end-to-end driving tasks remains an open question. In our work, we demonstrate that combining an end-to-end architectural design with knowledgeable VLMs yields impressive performance on driving tasks. Notably, our method uses only a single camera and is the best camera-only solution on the leaderboard, demonstrating the effectiveness of vision-based driving approaches and their potential for end-to-end driving tasks.
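To make the high-level idea concrete, below is a minimal, hypothetical sketch (not the authors' released code) of a camera-only planner in the spirit described above: a VLM-style image encoder and a driving-command embedding are fused and decoded into future waypoints. All module names, sizes, and the command vocabulary are assumptions for illustration; the image encoder is a small CNN stand-in so the example runs without pretrained weights.

```python
# Hypothetical sketch: single-camera, VLM-conditioned end-to-end waypoint planner.
# Names and dimensions are illustrative assumptions, not the paper's actual model.
import torch
import torch.nn as nn


class CameraOnlyVLMPlanner(nn.Module):
    def __init__(self, img_feat_dim=768, num_waypoints=8):
        super().__init__()
        # Stand-in for a pretrained VLM image encoder (e.g. a ViT backbone);
        # a small CNN is used here so the sketch is self-contained.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.Conv2d(32, img_feat_dim, kernel_size=3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Driving-command embedding (e.g. "turn left", "go straight"),
        # mimicking how language context conditions the planner.
        self.command_embed = nn.Embedding(num_embeddings=6, embedding_dim=img_feat_dim)
        # End-to-end head: fused features -> future (x, y) waypoints.
        self.waypoint_head = nn.Sequential(
            nn.Linear(2 * img_feat_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_waypoints * 2),
        )
        self.num_waypoints = num_waypoints

    def forward(self, image, command):
        img_feat = self.image_encoder(image)       # (B, img_feat_dim)
        cmd_feat = self.command_embed(command)     # (B, img_feat_dim)
        fused = torch.cat([img_feat, cmd_feat], dim=-1)
        wp = self.waypoint_head(fused)             # (B, num_waypoints * 2)
        return wp.view(-1, self.num_waypoints, 2)  # (B, T, 2) future waypoints


if __name__ == "__main__":
    model = CameraOnlyVLMPlanner()
    frame = torch.randn(1, 3, 224, 224)   # single front-camera frame
    command = torch.tensor([2])            # e.g. index for "go straight"
    print(model(frame, command).shape)     # torch.Size([1, 8, 2])
```

The key design point the sketch illustrates is that all perception comes from one camera stream, with language-derived context fused before the trajectory decoder rather than relying on additional sensors.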
Similar Papers
SOLVE: Synergy of Language-Vision and End-to-End Networks for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars make better driving plans.
RoboDriveVLM: A Novel Benchmark and Baseline towards Robust Vision-Language Models for Autonomous Driving
Artificial Intelligence
Makes self-driving cars safer in bad weather.
VLM-E2E: Enhancing End-to-End Autonomous Driving with Multimodal Driver Attention Fusion
CV and Pattern Recognition
Helps self-driving cars understand roads like people.