Vlaser: Vision-Language-Action Model with Synergistic Embodied Reasoning
By: Ganlin Yang, Tianyi Zhang, Haoran Hao, and more
Potential Business Impact:
Teaches robots to understand and act.
While significant research has focused on developing embodied reasoning capabilities with Vision-Language Models (VLMs) or integrating advanced VLMs into Vision-Language-Action (VLA) models for end-to-end robot control, few studies directly address the critical gap between upstream VLM-based reasoning and downstream VLA policy learning. In this work, we take an initial step toward bridging embodied reasoning with VLA policy learning by introducing Vlaser, a Vision-Language-Action model with synergistic embodied reasoning capability: a foundational vision-language model designed to integrate high-level reasoning with low-level control for embodied agents. Built upon the high-quality Vlaser-6M dataset, Vlaser achieves state-of-the-art performance across a range of embodied reasoning benchmarks, including spatial reasoning, embodied grounding, embodied QA, and task planning. Furthermore, we systematically examine how different VLM initializations affect supervised VLA fine-tuning, offering novel insights into mitigating the domain shift between internet-scale pre-training data and embodiment-specific policy learning data. Based on these insights, our approach achieves state-of-the-art results on the WidowX benchmark and competitive performance on the Google Robot benchmark.
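To make the abstract's setup concrete, below is a minimal sketch of what "supervised VLA fine-tuning" generally looks like: a pretrained VLM backbone is reused as a multimodal encoder and a small action head is trained by behavior cloning on (observation, instruction, action) demonstrations. This is not the paper's architecture or training code; every class, dimension, and hyperparameter here (ToyVLMBackbone, VLAPolicy, the 7-DoF action size, etc.) is an illustrative assumption.

```python
# Illustrative sketch only: generic supervised VLA fine-tuning via behavior cloning.
# A toy encoder stands in for a pretrained VLM; all names/dimensions are hypothetical.
import torch
import torch.nn as nn

class ToyVLMBackbone(nn.Module):
    """Stand-in for a pretrained vision-language model used to initialize a VLA policy."""
    def __init__(self, vision_dim=512, text_dim=512, hidden_dim=256):
        super().__init__()
        self.vision_proj = nn.Linear(vision_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, image_feats, text_feats):
        # image_feats: (B, N_img, vision_dim); text_feats: (B, N_txt, text_dim)
        tokens = torch.cat([self.vision_proj(image_feats), self.text_proj(text_feats)], dim=1)
        return self.fuse(tokens).mean(dim=1)  # pooled multimodal embedding (B, hidden_dim)

class VLAPolicy(nn.Module):
    """VLM backbone plus an action head, fine-tuned on robot demonstrations."""
    def __init__(self, backbone, hidden_dim=256, action_dim=7):
        super().__init__()
        self.backbone = backbone
        self.action_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, action_dim)
        )

    def forward(self, image_feats, text_feats):
        return self.action_head(self.backbone(image_feats, text_feats))

# Behavior-cloning loop over (observation, instruction, expert action) triples.
policy = VLAPolicy(ToyVLMBackbone())
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-4)

for step in range(3):  # stand-in for iterating over a robot demonstration dataset
    image_feats = torch.randn(8, 16, 512)   # dummy visual tokens
    text_feats = torch.randn(8, 12, 512)    # dummy instruction tokens
    expert_actions = torch.randn(8, 7)      # dummy 7-DoF end-effector actions
    loss = nn.functional.mse_loss(policy(image_feats, text_feats), expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this framing, the question the abstract raises about "how different VLM initializations affect supervised VLA fine-tuning" amounts to which pretrained weights the backbone starts from before the behavior-cloning stage, and how well those weights transfer to embodiment-specific demonstration data.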
Similar Papers
DualVLA: Building a Generalizable Embodied Agent via Partial Decoupling of Reasoning and Action
CV and Pattern Recognition
Teaches robots to act and think better.
Survey of Vision-Language-Action Models for Embodied Manipulation
Robotics
Robots learn to do tasks by watching and acting.
10 Open Challenges Steering the Future of Vision-Language-Action Models
Robotics
Robots learn to follow spoken commands and act.