Score: 2

Seeing to Act, Prompting to Specify: A Bayesian Factorization of Vision Language Action Policy

Published: December 12, 2025 | arXiv ID: 2512.11218v1

By: Kechun Xu, Zhenjie Zhu, Anzhe Chen, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Helps robots follow new instructions and generalize to unseen objects and environments without extensive retraining.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The pursuit of out-of-distribution generalization in Vision-Language-Action (VLA) models is often hindered by catastrophic forgetting of the Vision-Language Model (VLM) backbone during fine-tuning. While co-training with external reasoning data helps, it requires careful tuning and incurs data-related overhead. Beyond such external dependencies, we identify an intrinsic cause within VLA datasets: modality imbalance, where language diversity is much lower than visual and action diversity. This imbalance biases the model toward visual shortcuts and language forgetting. To address this, we introduce BayesVLA, a Bayesian factorization that decomposes the policy into a visual-action prior, supporting seeing-to-act, and a language-conditioned likelihood, enabling prompting-to-specify. This factorization inherently preserves generalization and promotes instruction following. We further incorporate pre- and post-contact phases to better leverage pre-trained foundation models. An information-theoretic analysis formally validates the effectiveness of our approach in mitigating shortcut learning. Extensive experiments show superior generalization to unseen instructions, objects, and environments compared to existing methods. Project page: https://xukechun.github.io/papers/BayesVLA.
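
The abstract does not state the exact formula, but a Bayesian factorization of this kind would typically take the following form (a sketch only; the symbols a for action, v for visual observation, and l for the language instruction are assumptions, not notation from the paper):

\pi(a \mid v, l) \;\propto\; p(a \mid v)\, p(l \mid a, v)

Here p(a | v) is the visual-action prior that supports seeing-to-act, and p(l | a, v) is the language-conditioned likelihood that lets the prompt specify which of the visually plausible actions to execute.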

Country of Origin
🇨🇳 🇺🇸 China, United States

Page Count
20 pages

Category
Computer Science:
Robotics