LatentVLA: Efficient Vision-Language Models for Autonomous Driving via Latent Action Prediction

Published: January 9, 2026 | arXiv ID: 2601.05611v1

By: Chengen Xie, Bin Sun, Tianyu Li, and more

Potential Business Impact:

Helps self-driving cars handle rare, long-tail road situations while making decisions fast enough for real-time driving.

Business Areas:
Autonomous Vehicles, Transportation

End-to-end autonomous driving models trained on large-scale datasets perform well in common scenarios but struggle with rare, long-tail situations due to limited scenario diversity. Recent Vision-Language-Action (VLA) models leverage broad knowledge from pre-trained vision-language models to address this limitation, yet face critical challenges: (1) numerical imprecision in trajectory prediction due to discrete tokenization, (2) heavy reliance on language annotations, which introduces linguistic bias and annotation burden, and (3) computational inefficiency from multi-step chain-of-thought reasoning, which hinders real-time deployment. We propose LatentVLA, a novel framework that employs self-supervised latent action prediction to train VLA models without language annotations, eliminating linguistic bias while learning rich driving representations from unlabeled trajectory data. Through knowledge distillation, LatentVLA transfers the generalization capabilities of VLA models to efficient vision-based networks, achieving both robust performance and real-time efficiency. LatentVLA establishes a new state-of-the-art on the NAVSIM benchmark with a PDMS score of 92.4 and demonstrates strong zero-shot generalization on the nuScenes benchmark.
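The abstract combines two mechanisms: a self-supervised latent action objective derived from unlabeled trajectories (no language annotations, no discrete tokens), and a distillation objective that transfers the VLA's behavior to a fast vision-only student. Below is a minimal PyTorch sketch of how such a two-loss setup could look. All module names (`LatentActionHead`, `TrajectoryEncoder`), dimensions, and the MSE objectives are illustrative assumptions, not the paper's actual architecture or losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentActionHead(nn.Module):
    """Predicts a continuous latent action from backbone features.

    Hypothetical stand-in for a latent action predictor; in the paper's
    setting this would sit on top of a pre-trained VLA backbone.
    """
    def __init__(self, feat_dim: int, latent_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.GELU(),
            nn.Linear(feat_dim, latent_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)

class TrajectoryEncoder(nn.Module):
    """Encodes future (x, y) waypoints into the same latent space,
    providing a self-supervised target with no language labels."""
    def __init__(self, num_waypoints: int, latent_dim: int):
        super().__init__()
        self.enc = nn.Linear(num_waypoints * 2, latent_dim)

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        return self.enc(traj.flatten(1))

# --- toy training step with random stand-in features ---
B, FEAT, LATENT, WPTS = 4, 256, 64, 8
teacher_head = LatentActionHead(FEAT, LATENT)   # on top of the VLA
student_head = LatentActionHead(FEAT, LATENT)   # lightweight vision net
traj_encoder = TrajectoryEncoder(WPTS, LATENT)

vla_feats = torch.randn(B, FEAT)        # features from the VLA teacher
vision_feats = torch.randn(B, FEAT)     # features from the fast student
future_traj = torch.randn(B, WPTS, 2)   # unlabeled future waypoints

# Self-supervised target comes from the trajectory alone (stop-gradient).
with torch.no_grad():
    target = traj_encoder(future_traj)

teacher_latent = teacher_head(vla_feats)
student_latent = student_head(vision_feats)

# 1) Latent action prediction: regress a continuous trajectory-derived
#    latent, sidestepping discrete tokenization and its imprecision.
loss_latent = F.mse_loss(teacher_latent, target)

# 2) Knowledge distillation: pull the efficient student toward the
#    teacher's latent so it inherits the VLA's generalization.
loss_distill = F.mse_loss(student_latent, teacher_latent.detach())

loss = loss_latent + loss_distill
loss.backward()
print(f"latent={loss_latent.item():.4f} distill={loss_distill.item():.4f}")
```

Two design points the sketch tries to reflect: regressing a continuous latent avoids the numerical imprecision the abstract attributes to token-based trajectory prediction, and detaching the teacher's latent keeps distillation one-directional, so only the efficient student (the network deployed for real-time inference) is pushed to match it.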

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition