LLaDA-VLA: Vision Language Diffusion Action Models
By: Yuqing Wen, Hebei Li, Kefan Gu, and more
Potential Business Impact:
Robots learn to do tasks by watching and reading.
The rapid progress of autoregressive vision-language models (VLMs) has inspired growing interest in vision-language-action (VLA) models for robotic manipulation. Recently, masked diffusion models, a paradigm distinct from autoregressive models, have begun to demonstrate competitive performance in text generation and multimodal applications, leading to the development of a series of diffusion-based VLMs (d-VLMs). However, leveraging such models for robot policy learning remains largely unexplored. In this work, we present LLaDA-VLA, the first Vision-Language-Diffusion-Action model built upon pretrained d-VLMs for robotic manipulation. To effectively adapt d-VLMs to the robotic domain, we introduce two key designs: (1) a localized special-token classification strategy that replaces full-vocabulary classification with classification over a small set of special action tokens, reducing adaptation difficulty; (2) a hierarchical action-structured decoding strategy that decodes action sequences hierarchically, accounting for dependencies within and across actions. Extensive experiments demonstrate that LLaDA-VLA significantly outperforms state-of-the-art VLAs on both simulated and real-world robots.
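To make the first design concrete, here is a minimal sketch (not the authors' code) of what "localized special-token classification" could look like: the full LM head of a d-VLM is applied, but logits and the cross-entropy loss are restricted to a reserved block of special action tokens. The vocabulary size, the action-token ID range, and all tensor shapes below are illustrative assumptions, not values from the paper.

```python
# Sketch: classify only over special action tokens instead of the full vocabulary.
# All sizes and token IDs are assumptions for illustration.
import torch
import torch.nn.functional as F

VOCAB_SIZE = 32000                              # assumed d-VLM vocabulary size
ACTION_TOKEN_IDS = torch.arange(31744, 32000)   # assumed 256 reserved action tokens

def action_token_logits(hidden, lm_head):
    """Project hidden states with the full LM head, then keep only the
    columns corresponding to the special action tokens."""
    full_logits = lm_head(hidden)                # (batch, seq, VOCAB_SIZE)
    return full_logits[..., ACTION_TOKEN_IDS]    # (batch, seq, 256)

def action_loss(hidden, lm_head, target_ids, mask):
    """Cross-entropy over the action-token subset, computed only at the
    positions that are masked (being denoised) in the current diffusion step."""
    logits = action_token_logits(hidden, lm_head)
    local_targets = target_ids - ACTION_TOKEN_IDS[0]   # map to subset indices
    return F.cross_entropy(logits[mask], local_targets[mask])

# Toy usage with random tensors standing in for d-VLM outputs.
if __name__ == "__main__":
    torch.manual_seed(0)
    batch, seq, dim = 2, 8, 64
    hidden = torch.randn(batch, seq, dim)
    lm_head = torch.nn.Linear(dim, VOCAB_SIZE, bias=False)
    target_ids = torch.randint(31744, 32000, (batch, seq))
    mask = torch.rand(batch, seq) < 0.5          # masked positions this step
    print(action_loss(hidden, lm_head, target_ids, mask))
```

Restricting the classification space this way is one plausible reading of how the adaptation burden on the pretrained d-VLM is reduced; the paper's hierarchical action-structured decoding would then operate over sequences of these action tokens.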
Similar Papers
Pure Vision Language Action (VLA) Models: A Comprehensive Survey
Robotics
Robots learn to see, talk, and do tasks.
Vision-Language-Action Models for Robotics: A Review Towards Real-World Applications
Robotics
Robots learn new jobs by seeing and hearing.