Galaxea Open-World Dataset and G0 Dual-System VLA Model

Published: August 30, 2025 | arXiv ID: 2509.00576v1

By: Tao Jiang, Tianyuan Yuan, Yicheng Liu, and more

Potential Business Impact:

Robots learn everyday tasks from demonstrations collected in real human living and working environments.

Business Areas:
Robotics Hardware, Science and Engineering, Software

We present Galaxea Open-World Dataset, a large-scale, diverse collection of robot behaviors recorded in authentic human living and working environments. All demonstrations are gathered using a consistent robotic embodiment, paired with precise subtask-level language annotations to facilitate both training and evaluation. Building on this dataset, we introduce G0, a dual-system framework that couples a Vision-Language Model (VLM) for multimodal planning with a Vision-Language-Action (VLA) model for fine-grained execution. G0 is trained using a three-stage curriculum: cross-embodiment pre-training, single-embodiment pre-training, and task-specific post-training. A comprehensive benchmark spanning tabletop manipulation, few-shot learning, and long-horizon mobile manipulation demonstrates the effectiveness of our approach. In particular, we find that the single-embodiment pre-training stage, together with the Galaxea Open-World Dataset, plays a critical role in achieving strong performance.
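The dual-system design in the abstract pairs a slow VLM planner, which emits subtask-level language instructions, with a fast VLA policy that turns observations and the current subtask into low-level actions. The sketch below illustrates that control loop; all class names, method signatures, and constants (VLMPlanner, VLAPolicy, run_episode, the replan interval, the 7-DoF action) are hypothetical stand-ins, since the paper's actual interfaces are not shown here.

```python
# Minimal sketch of a dual-system VLM/VLA control loop, assuming a slow
# planning loop and a fast action loop. All names and signatures are
# hypothetical illustrations of the architecture described in the abstract,
# not the paper's actual API.
import numpy as np

class VLMPlanner:
    """Slow loop: multimodal reasoning -> subtask-level language instruction."""
    def plan(self, image: np.ndarray, task: str) -> str:
        # A real planner would query a vision-language model here.
        return f"first subtask toward: {task}"

class VLAPolicy:
    """Fast loop: (observation, subtask) -> low-level robot action."""
    def act(self, image: np.ndarray, subtask: str) -> np.ndarray:
        # A real policy would decode actions from a VLA model.
        return np.zeros(7)  # placeholder, e.g. a 7-DoF arm command

def run_episode(planner: VLMPlanner, policy: VLAPolicy,
                task: str, steps: int = 200, replan_every: int = 50):
    image = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in camera frame
    subtask = planner.plan(image, task)
    for t in range(steps):
        if t and t % replan_every == 0:
            subtask = planner.plan(image, task)  # slow loop: replan
        action = policy.act(image, subtask)      # fast loop: act
        # A real system would send `action` to the robot and refresh `image`.

run_episode(VLMPlanner(), VLAPolicy(), "clear the table")
```

Running the planner only every `replan_every` steps reflects the usual motivation for dual-system designs: language-level replanning is expensive, while action generation must keep pace with the robot's control frequency.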

Page Count
15 pages

Category
Computer Science:
Robotics