CollabVLA: Self-Reflective Vision-Language-Action Model Dreaming Together with Human
By: Nan Sun, Yongchang Li, Chenxu Wang, and more
Potential Business Impact:
Helps robots know when to ask people for help, so they can assist more reliably.
In this work, we present CollabVLA, a self-reflective vision-language-action framework that transforms a standard visuomotor policy into a collaborative assistant. CollabVLA tackles key limitations of prior VLAs, including domain overfitting, non-interpretable reasoning, and the high latency of auxiliary generative models, by integrating VLM-based reflective reasoning with diffusion-based action generation under a mixture-of-experts design. Through a two-stage training recipe of action grounding and reflection tuning, it supports explicit self-reflection and proactively solicits human guidance when confronted with uncertainty or repeated failure. Compared with generative agents, it cuts normalized time by roughly 2x and dream counts by roughly 4x, while achieving higher success rates, improved interpretability, and a favorable latency trade-off relative to existing methods. This work takes a pioneering step toward shifting VLAs from opaque controllers to genuinely assistive agents capable of reasoning, acting, and collaborating with humans.
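To make the described behavior concrete, below is a minimal, hypothetical sketch of the control loop the abstract outlines: a diffusion-based action expert acts by default, and a reflective reasoning step is triggered by high uncertainty or repeated failure, at which point the agent solicits human guidance. All names here (DiffusionActionExpert, ReflectiveVLMExpert, ask_human, uncertainty_threshold, failure_patience) are illustrative assumptions, not the authors' implementation or the paper's actual mixture-of-experts architecture.

```python
# Hypothetical sketch (not CollabVLA's code): act with a diffusion-style expert,
# reflect and ask a human when uncertainty is high or failures repeat.
from dataclasses import dataclass
import random


@dataclass
class StepResult:
    action: list[float]   # toy continuous action
    uncertainty: float    # e.g., spread across sampled action candidates
    success: bool


class DiffusionActionExpert:
    """Stand-in for a diffusion-based action head."""
    def act(self, observation: str, instruction: str) -> StepResult:
        action = [random.uniform(-1, 1) for _ in range(7)]  # toy 7-DoF action
        uncertainty = random.random()
        return StepResult(action, uncertainty, success=uncertainty < 0.8)


class ReflectiveVLMExpert:
    """Stand-in for a VLM-based reflective reasoner."""
    def reflect(self, observation: str, instruction: str,
                history: list[StepResult]) -> str:
        return (f"After {len(history)} steps I am unsure how to "
                f"'{instruction}'; requesting guidance.")


def ask_human(question: str) -> str:
    # In a real system this would surface the question to an operator.
    print("[robot -> human]", question)
    return "Try approaching the object from the side."


def run_episode(instruction: str, max_steps: int = 10,
                uncertainty_threshold: float = 0.7,
                failure_patience: int = 2) -> None:
    actor, reflector = DiffusionActionExpert(), ReflectiveVLMExpert()
    history: list[StepResult] = []
    consecutive_failures = 0
    for step in range(max_steps):
        result = actor.act(observation=f"frame_{step}", instruction=instruction)
        history.append(result)
        consecutive_failures = 0 if result.success else consecutive_failures + 1
        # Reflection trigger: high uncertainty or repeated failure.
        if (result.uncertainty > uncertainty_threshold
                or consecutive_failures >= failure_patience):
            thought = reflector.reflect(f"frame_{step}", instruction, history)
            guidance = ask_human(thought)
            instruction = f"{instruction} (hint: {guidance})"
            consecutive_failures = 0


if __name__ == "__main__":
    run_episode("open the cabinet")
```

In the paper's system the reflective reasoning and action generation are integrated under one mixture-of-experts model; this stub only illustrates when self-reflection and human queries might be triggered, under the assumptions stated above.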
Similar Papers
DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge
CV and Pattern Recognition
Robots learn to do tasks by watching and thinking.
HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model
CV and Pattern Recognition
Robots follow instructions better, even in new situations.
Vision-Language-Action Models: Concepts, Progress, Applications and Challenges
CV and Pattern Recognition
Robots understand what they see and hear to act.