DriveVLA-W0: World Models Amplify Data Scaling Law in Autonomous Driving
By: Yingyan Li, Shuyao Shang, Weisong Liu, and more
Potential Business Impact:
Teaches self-driving cars to predict and drive better.
Scaling Vision-Language-Action (VLA) models on large-scale data offers a promising path to a more generalized driving intelligence. However, VLA models are limited by a "supervision deficit": their vast model capacity is supervised by sparse, low-dimensional actions, leaving much of their representational power underutilized. To remedy this, we propose DriveVLA-W0, a training paradigm that employs world modeling to predict future images. This task generates a dense, self-supervised signal that compels the model to learn the underlying dynamics of the driving environment. We showcase the paradigm's versatility by instantiating it for two dominant VLA archetypes: an autoregressive world model for VLAs that use discrete visual tokens, and a diffusion world model for those operating on continuous visual features. Building on the rich representations learned from world modeling, we introduce a lightweight action expert to reduce inference latency for real-time deployment. Extensive experiments on the NAVSIM v1/v2 benchmarks and a 680x larger in-house dataset demonstrate that DriveVLA-W0 significantly outperforms BEV and VLA baselines. Crucially, it amplifies the data scaling law: performance gains accelerate as the training dataset grows.
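To make the "supervision deficit" idea concrete, here is a minimal sketch (not the authors' code) of the joint objective the abstract describes for the autoregressive variant: a sparse action loss plus a dense world-model loss that predicts the next frame's discrete visual tokens. All module names, shapes, and the loss weight lambda_wm are illustrative assumptions.

```python
# Minimal sketch of DriveVLA-W0-style joint supervision (assumed design, not
# the released implementation): sparse action regression + dense next-frame
# token prediction, shown here with toy shapes and random data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDriveVLAW0(nn.Module):
    def __init__(self, vocab_size=1024, d_model=256, action_dim=3):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        # Dense head: predicts the discrete visual tokens of the next frame.
        self.world_head = nn.Linear(d_model, vocab_size)
        # Lightweight action expert: regresses the low-dimensional action.
        self.action_expert = nn.Sequential(
            nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, action_dim)
        )

    def forward(self, visual_tokens):
        h = self.backbone(self.token_embed(visual_tokens))  # (B, T, d_model)
        next_token_logits = self.world_head(h)              # (B, T, vocab)
        action = self.action_expert(h.mean(dim=1))          # (B, action_dim)
        return next_token_logits, action

def training_step(model, tokens_t, tokens_t1, action_gt, lambda_wm=1.0):
    logits, action_pred = model(tokens_t)
    # Sparse supervision: a handful of action values per clip.
    action_loss = F.l1_loss(action_pred, action_gt)
    # Dense self-supervision: every future visual token is a target.
    wm_loss = F.cross_entropy(logits.flatten(0, 1), tokens_t1.flatten())
    return action_loss + lambda_wm * wm_loss

# Toy usage with random data.
model = ToyDriveVLAW0()
tokens_t = torch.randint(0, 1024, (2, 16))   # current-frame visual tokens
tokens_t1 = torch.randint(0, 1024, (2, 16))  # next-frame tokens (WM target)
action_gt = torch.randn(2, 3)                # e.g. a short waypoint vector
loss = training_step(model, tokens_t, tokens_t1, action_gt)
loss.backward()
```

The point of the sketch is the loss composition: the world-model head receives a gradient for every visual token, while the action expert receives only a few scalars per sample, which is why the dense term can dominate representation learning.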
Similar Papers
AutoVLA: A Vision-Language-Action Model for End-to-End Autonomous Driving with Adaptive Reasoning and Reinforcement Fine-Tuning
CV and Pattern Recognition
Helps self-driving cars plan safer, faster trips.
IRL-VLA: Training an Vision-Language-Action Policy via Reward World Model
Artificial Intelligence
Teaches self-driving cars to drive safely and efficiently.
Reasoning-VLA: A Fast and General Vision-Language-Action Reasoning Model for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars drive smarter and faster.