Galaxea Open-World Dataset and G0 Dual-System VLA Model
By: Tao Jiang, Tianyuan Yuan, Yicheng Liu, and more
Potential Business Impact:
Robots learn everyday tasks from human demonstrations in real homes and workplaces.
We present the Galaxea Open-World Dataset, a large-scale, diverse collection of robot behaviors recorded in authentic human living and working environments. All demonstrations are gathered with a single, consistent robotic embodiment and paired with precise subtask-level language annotations to support both training and evaluation. Building on this dataset, we introduce G0, a dual-system framework that couples a Vision-Language Model (VLM) for multimodal planning with a Vision-Language-Action (VLA) model for fine-grained execution. G0 is trained with a three-stage curriculum: cross-embodiment pre-training, single-embodiment pre-training, and task-specific post-training. A comprehensive benchmark spanning tabletop manipulation, few-shot learning, and long-horizon mobile manipulation demonstrates the effectiveness of our approach. In particular, we find that the single-embodiment pre-training stage, together with the Galaxea Open-World Dataset, plays a critical role in achieving strong performance.
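To make the dual-system idea concrete, below is a minimal Python sketch of how a slow VLM planner and a fast VLA controller might be wired together at inference time. All names here (VLMPlanner, VLAController, plan, act, the 7-DoF action stub, and the replanning interval) are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch of a dual-system VLM planner + VLA controller loop.
# Every class and method name below is an illustrative assumption about
# the G0 design, not the paper's actual API.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Observation:
    rgb: List[List[float]] = field(default_factory=list)            # camera frames (stub)
    proprio: List[float] = field(default_factory=lambda: [0.0] * 7)  # joint states (stub)


class VLMPlanner:
    """System 2: slow multimodal planner that emits subtask-level language."""

    def plan(self, obs: Observation, task: str) -> str:
        # A real planner would query a VLM; here we return a fixed subtask.
        return f"first subtask for: {task}"


class VLAController:
    """System 1: fast policy mapping (observation, subtask) to low-level actions."""

    def act(self, obs: Observation, subtask: str) -> List[float]:
        # A real VLA model would output continuous control; this is a placeholder.
        return [0.0] * 7  # placeholder 7-DoF action


def run_episode(task: str, steps: int = 100, replan_every: int = 50) -> None:
    planner, controller = VLMPlanner(), VLAController()
    obs = Observation()
    subtask = planner.plan(obs, task)              # plan at low frequency
    for t in range(steps):
        action = controller.act(obs, subtask)      # act at high frequency
        # send `action` to the robot and refresh `obs` here
        if t > 0 and t % replan_every == 0:
            subtask = planner.plan(obs, task)      # periodic replanning


if __name__ == "__main__":
    run_episode("set the table")
```

The design point this sketch illustrates is the frequency split: the planner updates the subtask only occasionally, while the controller closes the control loop at every step conditioned on the current subtask instruction.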
Similar Papers
GigaBrain-0: A World Model-Powered Vision-Language-Action Model
Robotics
Robots learn tasks faster with fake robot videos.
GigaWorld-0: World Models as Data Engine to Empower Embodied AI
CV and Pattern Recognition
Makes robots learn tasks without real-world practice.