MM-ACT: Learn from Multimodal Parallel Generation to Act
By: Haotian Liang, Xinyi Chen, Bin Wang, and more
Potential Business Impact:
Robots learn to do tasks by seeing, reading, and acting.
A generalist robotic policy needs both semantic understanding for task planning and the predictive capability to interact with the environment. To tackle this, we present MM-ACT, a unified Vision-Language-Action (VLA) model that integrates text, image, and action in a shared token space and performs generation across all three modalities. MM-ACT adopts a re-mask parallel decoding strategy for text and image generation, and a one-step parallel decoding strategy for action generation to improve efficiency. We introduce Context-Shared Multimodal Learning, a unified training paradigm that supervises generation in all three modalities from a shared context, enhancing action generation through cross-modal learning. Experiments were conducted on the LIBERO simulation and Franka real-robot setups, as well as on RoboTwin2.0, to assess in-domain and out-of-domain performance, respectively. Our approach achieves a success rate of 96.3% on LIBERO, 72.0% across three real-world Franka tasks, and 52.38% across eight bimanual RoboTwin2.0 tasks, with an additional gain of 9.25% from cross-modal learning. We release our code, models, and data at https://github.com/HHYHRHY/MM-ACT.
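To make the idea of supervising three modalities from one shared context concrete, below is a minimal, hypothetical sketch in Python/PyTorch. It is not the released MM-ACT implementation: the class and function names (SharedContextVLA, context_shared_loss), the dimensions, the pooling step, and the loss weights are all assumptions chosen for illustration. It shows one backbone producing a shared context, separate heads for text, image, and action, a chunk of actions predicted in a single forward pass (one-step parallel decoding), and a combined loss over all three outputs.

import torch
import torch.nn as nn

# All sizes below are illustrative assumptions, not values from the paper.
VOCAB_SIZE = 32000        # shared text/image token vocabulary (assumed)
ACTION_DIM = 7            # e.g. 6-DoF end-effector pose + gripper (assumed)
ACTION_CHUNK = 8          # actions decoded in one parallel step (assumed)
D_MODEL = 512

class SharedContextVLA(nn.Module):
    """Toy unified policy: one transformer backbone over a shared token
    sequence, with separate heads for text, image, and action generation."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.text_head = nn.Linear(D_MODEL, VOCAB_SIZE)    # logits for masked text tokens
        self.image_head = nn.Linear(D_MODEL, VOCAB_SIZE)   # logits for masked image tokens
        # One-step parallel action decoding: predict the whole action chunk at once.
        self.action_head = nn.Linear(D_MODEL, ACTION_DIM * ACTION_CHUNK)

    def forward(self, tokens):
        h = self.backbone(self.embed(tokens))      # shared context for all modalities
        pooled = h.mean(dim=1)                     # simple pooling stand-in
        return (
            self.text_head(h),                                           # (B, T, V)
            self.image_head(h),                                          # (B, T, V)
            self.action_head(pooled).view(-1, ACTION_CHUNK, ACTION_DIM), # (B, C, A)
        )

def context_shared_loss(model, tokens, text_targets, image_targets, action_targets,
                        w_text=1.0, w_image=1.0, w_action=1.0):
    """Supervise text, image, and action generation from the same shared context."""
    text_logits, image_logits, actions = model(tokens)
    ce = nn.CrossEntropyLoss(ignore_index=-100)    # -100 marks positions without a target
    loss_text = ce(text_logits.flatten(0, 1), text_targets.flatten())
    loss_image = ce(image_logits.flatten(0, 1), image_targets.flatten())
    loss_action = nn.functional.mse_loss(actions, action_targets)
    return w_text * loss_text + w_image * loss_image + w_action * loss_action

if __name__ == "__main__":
    model = SharedContextVLA()
    B, T = 2, 16
    tokens = torch.randint(0, VOCAB_SIZE, (B, T))
    text_tgt = torch.randint(0, VOCAB_SIZE, (B, T))
    image_tgt = torch.randint(0, VOCAB_SIZE, (B, T))
    action_tgt = torch.randn(B, ACTION_CHUNK, ACTION_DIM)
    loss = context_shared_loss(model, tokens, text_tgt, image_tgt, action_tgt)
    print(loss.item())

The design point the sketch tries to capture is that the action head sees the same backbone features as the text and image heads, so gradients from text and image supervision can shape the representation the policy acts from; the re-mask scheduling and tokenizer details of the actual paper are omitted here.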
Similar Papers
ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model
Robotics
Robots learn to see, talk, and do tasks.
ThinkAct: Vision-Language-Action Reasoning via Reinforced Visual Latent Planning
CV and Pattern Recognition
Robots learn to plan and fix mistakes.
UniAct: Unified Motion Generation and Action Streaming for Humanoid Robots
CV and Pattern Recognition
Robots follow many kinds of commands instantly.