MindDrive: A Vision-Language-Action Model for Autonomous Driving via Online Reinforcement Learning
By: Haoyu Fu, Diankun Zhang, Zongchuang Zhao, and more
Current Vision-Language-Action (VLA) paradigms in autonomous driving primarily rely on Imitation Learning (IL), which introduces inherent challenges such as distribution shift and causal confusion. Online Reinforcement Learning offers a promising pathway to address these issues through trial-and-error learning. However, applying online reinforcement learning to VLA models in autonomous driving is hindered by inefficient exploration in continuous action spaces. To overcome this limitation, we propose MindDrive, a VLA framework comprising a large language model (LLM) with two distinct sets of LoRA parameters. One set serves as a Decision Expert for scenario reasoning and driving decision-making, while the other acts as an Action Expert that dynamically maps linguistic decisions into feasible trajectories. By feeding trajectory-level rewards back into the reasoning space, MindDrive enables trial-and-error learning over a finite set of discrete linguistic driving decisions, instead of operating directly in a continuous action space. This approach effectively balances optimal decision-making in complex scenarios, human-like driving behavior, and efficient exploration in online reinforcement learning. Using the lightweight Qwen-0.5B LLM, MindDrive achieves a Driving Score (DS) of 78.04 and a Success Rate (SR) of 55.09% on the challenging Bench2Drive benchmark. To the best of our knowledge, this is the first work to demonstrate the effectiveness of online reinforcement learning for VLA models in autonomous driving.
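The two ideas in the abstract, a single frozen backbone carrying two switchable LoRA adapter sets, and trial-and-error learning over a finite set of discrete linguistic decisions driven by a trajectory-level reward, can be illustrated with a toy PyTorch sketch. Everything below is a hypothetical simplification: the decision vocabulary, the linear "backbone", the heads, and the scalar reward are all stand-ins, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical discrete linguistic decision set (not the paper's vocabulary).
DECISIONS = ["keep_lane", "turn_left", "turn_right", "stop"]

class DualLoRALinear(nn.Module):
    """A frozen base layer plus two switchable low-rank (LoRA) deltas,
    standing in for one LLM that hosts both a Decision Expert and an
    Action Expert as separate adapter parameter sets."""
    def __init__(self, d, rank=4):
        super().__init__()
        self.base = nn.Linear(d, d)
        self.base.weight.requires_grad_(False)  # frozen backbone weights
        self.base.bias.requires_grad_(False)
        self.lora = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, rank, bias=False),
                                nn.Linear(rank, d, bias=False))
            for name in ("decision", "action")
        })
        self.active = "decision"

    def forward(self, x):
        # Base output plus the currently selected expert's low-rank delta.
        return self.base(x) + self.lora[self.active](x)

d = 16
backbone = DualLoRALinear(d)
decision_head = nn.Linear(d, len(DECISIONS))  # logits over discrete decisions
action_head = nn.Linear(d, 2 * 5)             # toy 5-waypoint (x, y) trajectory

opt = torch.optim.Adam(
    list(backbone.lora.parameters())
    + list(decision_head.parameters())
    + list(action_head.parameters()), lr=1e-3)

obs = torch.randn(1, d)  # stand-in for fused vision-language features

# One REINFORCE-style step: sample a discrete decision (exploration happens
# in this small finite space, not in continuous trajectory space), decode a
# trajectory, then feed a trajectory-level reward back into the decision
# distribution.
backbone.active = "decision"
logits = decision_head(backbone(obs))
dist = torch.distributions.Categorical(logits=logits)
decision = dist.sample()

backbone.active = "action"
trajectory = action_head(backbone(obs)).view(5, 2)

reward = torch.tensor(1.0)  # placeholder for a simulator's trajectory reward
loss = -dist.log_prob(decision) * reward  # policy-gradient surrogate loss
opt.zero_grad()
loss.backward()
opt.step()
```

The point of the sketch is the shape of the loop: the policy gradient only has to explore |DECISIONS| options per step, while the continuous trajectory is produced deterministically by the action adapter, which is the exploration-efficiency argument the abstract makes.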