CLAP: Contrastive Latent Action Pretraining for Learning Vision-Language-Action Models from Human Videos
By: Chubin Zhang, Jianan Wang, Zifeng Gao, and more
Potential Business Impact:
Teaches robots to do tasks from watching videos.
Generalist Vision-Language-Action (VLA) models are currently hindered by the scarcity of robotic data compared to the abundance of human video demonstrations. Existing Latent Action Models attempt to leverage video data but often suffer from visual entanglement, capturing noise rather than manipulation skills. To address this, we propose Contrastive Latent Action Pretraining (CLAP), a framework that aligns the visual latent space from videos with a proprioceptive latent space from robot trajectories. By employing contrastive learning, CLAP maps video transitions onto a quantized, physically executable codebook. Building on this representation, we introduce a dual-formulation VLA framework offering both CLAP-NTP, an autoregressive model excelling at instruction following and object generalization, and CLAP-RF, a Rectified Flow-based policy designed for high-frequency, precise manipulation. Furthermore, we propose a Knowledge Matching (KM) regularization strategy to mitigate catastrophic forgetting during fine-tuning. Extensive experiments demonstrate that CLAP significantly outperforms strong baselines, enabling the effective transfer of skills from human videos to robotic execution. Project page: https://lin-shan.com/CLAP/.
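To make the core idea concrete, the sketch below illustrates one plausible way to align video-transition embeddings with proprioceptive trajectory embeddings over a shared quantized codebook using a symmetric InfoNCE-style contrastive loss. This is not the authors' implementation: the module name, encoder projections, dimensions, codebook size, and temperature are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): contrastive alignment of video
# transitions with robot proprioceptive trajectories over a quantized codebook.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentActionAligner(nn.Module):
    def __init__(self, vis_dim=512, prop_dim=64, latent_dim=128,
                 codebook_size=256, temperature=0.07):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, latent_dim)    # video transition -> latent
        self.prop_proj = nn.Linear(prop_dim, latent_dim)  # robot trajectory -> latent
        self.codebook = nn.Embedding(codebook_size, latent_dim)  # quantized latent actions
        self.temperature = temperature

    def quantize(self, z):
        # Nearest-neighbor lookup in the codebook with a straight-through estimator.
        dists = torch.cdist(z, self.codebook.weight)   # (B, K)
        idx = dists.argmin(dim=-1)                     # (B,)
        z_q = self.codebook(idx)                       # (B, D)
        return z + (z_q - z).detach(), idx

    def forward(self, video_feat, proprio_feat):
        z_v = F.normalize(self.vis_proj(video_feat), dim=-1)
        z_p = F.normalize(self.prop_proj(proprio_feat), dim=-1)
        z_v_q, _ = self.quantize(z_v)
        z_p_q, _ = self.quantize(z_p)
        # Symmetric InfoNCE: paired video/robot transitions in the batch are positives.
        logits = z_v_q @ z_p_q.t() / self.temperature  # (B, B)
        targets = torch.arange(logits.size(0), device=logits.device)
        loss = 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))
        return loss
```

Under these assumptions, the contrastive objective pulls each video transition toward the codebook entry that its paired robot trajectory maps to, which is one way to obtain the "physically executable" discrete latent actions the abstract describes.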
Similar Papers
RoboAct-CLIP: Video-Driven Pre-training of Atomic Action Understanding for Robotics
Robotics
Teaches robots to do tasks by watching videos.
Latent Action Pretraining Through World Modeling
Robotics
Teaches robots to do tasks from watching videos.
MiVLA: Towards Generalizable Vision-Language-Action Model with Human-Robot Mutual Imitation Pre-training
Robotics
Robots learn to do tasks better by watching humans.