SOLVE: Synergy of Language-Vision and End-to-End Networks for Autonomous Driving
By: Xuesong Chen, Linjiang Huang, Tao Ma, and more
Potential Business Impact:
Helps self-driving cars make better driving plans.
The integration of Vision-Language Models (VLMs) into autonomous driving systems has shown promise in addressing key challenges such as learning complexity, interpretability, and common-sense reasoning. However, existing approaches often struggle with efficient integration and real-time decision-making due to computational demands. In this paper, we introduce SOLVE, an innovative framework that synergizes VLMs with end-to-end (E2E) models to enhance autonomous vehicle planning. Our approach emphasizes knowledge sharing at the feature level through a shared visual encoder, enabling comprehensive interaction between VLM and E2E components. We propose a Trajectory Chain-of-Thought (T-CoT) paradigm, which progressively refines trajectory predictions, reducing uncertainty and improving accuracy. By employing a temporal decoupling strategy, SOLVE achieves efficient cooperation by aligning high-quality VLM outputs with E2E real-time performance. Evaluated on the nuScenes dataset, our method demonstrates significant improvements in trajectory prediction accuracy, paving the way for more robust and reliable autonomous driving systems.
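The abstract names three ideas that compose SOLVE: a shared visual encoder feeding both branches, Trajectory Chain-of-Thought (T-CoT) refinement, and temporal decoupling between the slow VLM branch and the fast E2E planner. The paper's code is not reproduced here, so the PyTorch sketch below is only an illustration of how those pieces could fit together; every module name (SharedEncoder, TCoTRefiner, E2EPlanner), dimension, refinement rule, and the pooled-feature guidance are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the ideas in the SOLVE abstract:
# a shared visual encoder feeds both a VLM branch and an E2E planner, the VLM
# branch refines a coarse trajectory step by step (T-CoT), and temporal
# decoupling lets cached VLM output from an earlier frame guide the fast E2E
# head on the current frame. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    """Stand-in for the shared visual encoder (e.g. a ViT-style backbone)."""
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=16, stride=16)

    def forward(self, images):                   # (B, 3, H, W)
        tokens = self.proj(images).flatten(2)    # (B, dim, N)
        return tokens.transpose(1, 2)            # (B, N, dim)


class TCoTRefiner(nn.Module):
    """Hypothetical Trajectory Chain-of-Thought: iteratively refine waypoints."""
    def __init__(self, dim=256, steps=3, horizon=6):
        super().__init__()
        self.steps = steps
        self.query = nn.Parameter(torch.randn(horizon, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.to_xy = nn.Linear(dim, 2)

    def forward(self, feats):                            # (B, N, dim)
        q = self.query.expand(feats.size(0), -1, -1)     # one query per waypoint
        traj = None
        for _ in range(self.steps):                      # coarse-to-fine steps
            q, _ = self.attn(q, feats, feats)
            delta = self.to_xy(q)                        # per-step correction
            traj = delta if traj is None else traj + delta
        return traj                                      # (B, horizon, 2)


class E2EPlanner(nn.Module):
    """Fast planning head; consumes current features plus cached VLM guidance."""
    def __init__(self, dim=256, horizon=6):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, horizon * 2))
        self.horizon = horizon

    def forward(self, feats, vlm_ctx):                   # feats: (B, N, dim)
        pooled = feats.mean(dim=1)                       # (B, dim)
        x = torch.cat([pooled, vlm_ctx], dim=-1)
        return self.head(x).view(-1, self.horizon, 2)


encoder, refiner, planner = SharedEncoder(), TCoTRefiner(), E2EPlanner()
prev_frame = torch.randn(1, 3, 224, 224)   # frame the slow VLM branch processed
cur_frame = torch.randn(1, 3, 224, 224)    # frame the fast E2E branch plans on

# Temporal decoupling: the VLM branch runs asynchronously on an older frame;
# its pooled features are cached and reused by the real-time E2E planner.
with torch.no_grad():
    vlm_feats = encoder(prev_frame)
    coarse_traj = refiner(vlm_feats)        # T-CoT refined trajectory (slow path)
    vlm_ctx = vlm_feats.mean(dim=1)         # cached guidance for the fast path

traj = planner(encoder(cur_frame), vlm_ctx) # real-time trajectory (fast path)
print(coarse_traj.shape, traj.shape)        # both torch.Size([1, 6, 2])
```

One design point worth noting: the residual update in TCoTRefiner (accumulating a correction at each step) mirrors the abstract's claim that T-CoT "progressively refines trajectory predictions", while the no-grad cached context mimics how an asynchronous VLM output could guide a real-time planner without blocking it.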
Similar Papers
2nd Place Solution for CVPR2024 E2E Challenge: End-to-End Autonomous Driving Using Vision Language Model
CV and Pattern Recognition
Cars drive themselves using just one camera.
FutureSightDrive: Thinking Visually with Spatio-Temporal CoT for Autonomous Driving
CV and Pattern Recognition
Cars learn to drive by imagining future road scenes.
VLM-E2E: Enhancing End-to-End Autonomous Driving with Multimodal Driver Attention Fusion
CV and Pattern Recognition
Helps self-driving cars understand roads like people.