Masked Generative Policy for Robotic Control
By: Lipeng Zhuang, Shiyu Fan, Florent P. Audonnet and more
Potential Business Impact:
Robots learn tasks faster and perform them more reliably.
We present Masked Generative Policy (MGP), a novel framework for visuomotor imitation learning. We represent actions as discrete tokens, and train a conditional masked transformer that generates tokens in parallel and then rapidly refines only low-confidence tokens. We further propose two new sampling paradigms: MGP-Short, which performs parallel masked generation with score-based refinement for Markovian tasks, and MGP-Long, which predicts full trajectories in a single pass and dynamically refines low-confidence action tokens based on new observations. With globally coherent prediction and robust adaptive execution capabilities, MGP-Long enables reliable control on complex and non-Markovian tasks that prior methods struggle with. Extensive evaluations on 150 robotic manipulation tasks spanning the Meta-World and LIBERO benchmarks show that MGP achieves both rapid inference and superior success rates compared to state-of-the-art diffusion and autoregressive policies. Specifically, MGP increases the average success rate by 9% across 150 tasks while cutting per-sequence inference time by up to 35x. It further improves the average success rate by 60% in dynamic and missing-observation environments, and solves two non-Markovian scenarios where other state-of-the-art methods fail.
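The core decoding loop described above — predict all action tokens in parallel, keep the confident ones, and re-mask and re-predict only the low-confidence ones — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `score_fn` stands in for the conditional masked transformer, and names like `masked_generate`, the `MASK` sentinel, and the linear re-masking schedule are assumptions for the example.

```python
import numpy as np

MASK = -1  # sentinel value for a masked (not yet committed) action token


def masked_generate(score_fn, seq_len, vocab_size, steps=4):
    """Parallel masked decoding with confidence-based refinement.

    score_fn(tokens) -> (seq_len, vocab_size) array of per-token
    probabilities; it stands in for the conditional masked transformer.
    """
    tokens = np.full(seq_len, MASK, dtype=int)
    for step in range(steps):
        probs = score_fn(tokens)          # predict every position in parallel
        pred = probs.argmax(axis=-1)      # greedy token per position
        conf = probs.max(axis=-1)         # per-token confidence score
        tokens = pred.copy()              # provisionally accept all predictions
        # Re-mask the least-confident fraction; the fraction shrinks each
        # step so the final pass commits every token.
        n_mask = int(seq_len * (1 - (step + 1) / steps))
        if n_mask > 0:
            low = np.argsort(conf)[:n_mask]
            tokens[low] = MASK
    return tokens


def make_toy_scorer(seq_len, vocab_size, seed=0):
    """A fixed random 'model' so the sketch runs without a trained network."""
    rng = np.random.default_rng(seed)
    logits = rng.normal(size=(seq_len, vocab_size))

    def score_fn(tokens):
        p = np.exp(logits)
        return p / p.sum(axis=-1, keepdims=True)

    return score_fn
```

MGP-Long would run this loop once over a full trajectory and then re-invoke the refinement step as new observations arrive, re-masking tokens whose confidence drops; MGP-Short applies the same idea per step for Markovian tasks.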
Similar Papers
RGMP: Recurrent Geometric-prior Multimodal Policy for Generalizable Humanoid Robot Manipulation
Robotics
Robots learn new tasks faster with less practice.
VGGT-DP: Generalizable Robot Control via Vision Foundation Models
Robotics
Robots learn to do tasks by watching.
MDG: Masked Denoising Generation for Multi-Agent Behavior Modeling in Traffic Environments
Robotics
Makes self-driving cars predict and plan better.