LatentVLA: Efficient Vision-Language Models for Autonomous Driving via Latent Action Prediction
By: Chengen Xie, Bin Sun, Tianyu Li, and more
Potential Business Impact:
Teaches cars to drive safely even in rare situations.
End-to-end autonomous driving models trained on large-scale datasets perform well in common scenarios but struggle with rare, long-tail situations due to limited scenario diversity. Recent Vision-Language-Action (VLA) models leverage broad knowledge from pre-trained vision-language models to address this limitation, yet face critical challenges: (1) numerical imprecision in trajectory prediction due to discrete tokenization, (2) heavy reliance on language annotations, which introduces linguistic bias and annotation burden, and (3) computational inefficiency from multi-step chain-of-thought reasoning, which hinders real-time deployment. We propose LatentVLA, a novel framework that employs self-supervised latent action prediction to train VLA models without language annotations, eliminating linguistic bias while learning rich driving representations from unlabeled trajectory data. Through knowledge distillation, LatentVLA transfers the generalization capabilities of VLA models to efficient vision-based networks, achieving both robust performance and real-time efficiency. LatentVLA establishes a new state of the art on the NAVSIM benchmark with a PDMS of 92.4 and demonstrates strong zero-shot generalization on the nuScenes benchmark.
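To make the two training ideas in the abstract concrete, the following is a minimal PyTorch sketch of (1) self-supervised latent action prediction, where targets come from unlabeled trajectory data rather than language annotations, and (2) distillation of the VLA teacher into a lightweight vision-only student. All module names, feature dimensions, objectives, and loss weights (TrajectoryEncoder, VLATeacher, VisionStudent, the MSE losses) are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

HORIZON, LATENT_DIM, FEAT_DIM = 8, 64, 512  # illustrative sizes

class TrajectoryEncoder(nn.Module):
    # Maps an unlabeled future trajectory to a compact latent action.
    # Assumed pretrained and frozen here so the latent targets cannot collapse.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HORIZON * 2, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )
    def forward(self, traj):              # traj: (B, HORIZON, 2) waypoints
        return self.net(traj.flatten(1))  # -> (B, LATENT_DIM)

class VLATeacher(nn.Module):
    # Stand-in head for the pre-trained VLA backbone.
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(FEAT_DIM, LATENT_DIM)
    def forward(self, vla_feats):         # vla_feats: (B, FEAT_DIM)
        return self.head(vla_feats)

class VisionStudent(nn.Module):
    # Lightweight vision-only network distilled from the teacher.
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(FEAT_DIM, LATENT_DIM)
    def forward(self, cam_feats):         # cam_feats: (B, FEAT_DIM)
        return self.head(cam_feats)

def training_step(traj_enc, teacher, student, vla_feats, cam_feats, future_traj):
    # (1) Self-supervised latent action prediction: the teacher learns to
    #     predict latents derived from unlabeled trajectories, no language.
    with torch.no_grad():
        target = traj_enc(future_traj)
    latent_loss = F.mse_loss(teacher(vla_feats), target)
    # (2) Distillation: the student matches the teacher's latent prediction;
    #     detach() keeps gradients from flowing back into the teacher.
    distill_loss = F.mse_loss(student(cam_feats), teacher(vla_feats).detach())
    return latent_loss + distill_loss     # illustrative equal weighting

# Smoke test with random tensors.
B = 4
loss = training_step(TrajectoryEncoder(), VLATeacher(), VisionStudent(),
                     torch.randn(B, FEAT_DIM), torch.randn(B, FEAT_DIM),
                     torch.randn(B, HORIZON, 2))
loss.backward()

Detaching the teacher's output in the distillation term reflects the abstract's one-way transfer of generalization capability from the VLA model to the efficient vision network; only the student is updated by that loss, so the deployed model can run without the VLA backbone.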
Similar Papers
Reasoning-VLA: A Fast and General Vision-Language-Action Reasoning Model for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars drive smarter and faster.
AutoVLA: A Vision-Language-Action Model for End-to-End Autonomous Driving with Adaptive Reasoning and Reinforcement Fine-Tuning
CV and Pattern Recognition
Helps self-driving cars plan safer, faster trips.
Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future
Robotics
Teaches cars to drive by watching and understanding words.