Vision-Language Models Unlock Task-Centric Latent Actions
By: Alexander Nikulin, Ilya Zisman, Albina Klepach, and more
Potential Business Impact:
Teaches robots to ignore distractions and learn better.
Latent Action Models (LAMs) have rapidly gained traction as an important component in the pre-training pipelines of leading Vision-Language-Action models. However, they fail when observations contain action-correlated distractors, often encoding noise instead of meaningful latent actions. Humans, on the other hand, can effortlessly distinguish task-relevant motions from irrelevant details in any video given only a brief task description. In this work, we propose to utilize the common-sense reasoning abilities of Vision-Language Models (VLMs) to provide promptable representations, effectively separating controllable changes from the noise in an unsupervised way. We use these representations as targets during LAM training and benchmark a wide variety of popular VLMs, revealing substantial variation in the quality of promptable representations as well as in their robustness to different prompts and hyperparameters. Interestingly, we find that more recent VLMs may perform worse than older ones. Finally, we show that simply asking VLMs to ignore distractors can substantially improve latent action quality, yielding up to a six-fold increase in downstream success rates on Distracting MetaWorld.
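To make the idea concrete, the sketch below illustrates (it is not the authors' implementation) how prompt-conditioned VLM embeddings could serve as reconstruction targets for a latent action model: the LAM infers a latent action from a pair of frames and is trained to predict the VLM embedding of the next frame rather than raw pixels, so distractor motion excluded by the prompt is absent from the target. The `vlm_encode` placeholder, the network sizes, and the prompt text are assumptions made for illustration.

```python
# Minimal sketch: prompt-conditioned VLM embeddings as LAM targets.
# `vlm_encode` is a hypothetical stand-in for a real vision-language
# model queried with a prompt that asks it to ignore distractors.
import torch
import torch.nn as nn


def vlm_encode(frames: torch.Tensor, prompt: str) -> torch.Tensor:
    """Hypothetical prompt-conditioned VLM encoder: (B, C, H, W) -> (B, D)."""
    del prompt  # a real VLM would condition its embedding on the prompt text
    return torch.randn(frames.shape[0], 512)  # stand-in embedding


class LatentActionModel(nn.Module):
    """Inverse/forward-style LAM: infer a latent action from (o_t, o_{t+1}),
    then predict the VLM embedding of o_{t+1} instead of raw pixels."""

    def __init__(self, obs_dim: int = 512, latent_dim: int = 16):
        super().__init__()
        # (z_t, z_{t+1}) -> latent action
        self.inverse = nn.Sequential(
            nn.Linear(2 * obs_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        # (z_t, latent action) -> predicted z_{t+1}
        self.forward_model = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, obs_dim)
        )

    def forward(self, z_t: torch.Tensor, z_next: torch.Tensor):
        action = self.inverse(torch.cat([z_t, z_next], dim=-1))
        pred_next = self.forward_model(torch.cat([z_t, action], dim=-1))
        return action, pred_next


# One training step: the target is the VLM embedding of the next frame,
# obtained with a prompt that explicitly asks to ignore distractors.
prompt = "Focus on the robot arm and the manipulated object; ignore the background."
lam = LatentActionModel()
optimizer = torch.optim.Adam(lam.parameters(), lr=3e-4)

frames_t = torch.rand(8, 3, 224, 224)      # batch of current frames
frames_next = torch.rand(8, 3, 224, 224)   # batch of next frames

z_t = vlm_encode(frames_t, prompt).detach()
z_next = vlm_encode(frames_next, prompt).detach()

optimizer.zero_grad()
_, pred_next = lam(z_t, z_next)
loss = nn.functional.mse_loss(pred_next, z_next)
loss.backward()
optimizer.step()
```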
Similar Papers
VisualActBench: Can VLMs See and Act like a Human?
CV and Pattern Recognition
Teaches computers to act smartly by just watching.
10 Open Challenges Steering the Future of Vision-Language-Action Models
Robotics
Robots learn to follow spoken commands and act.
mimic-video: Video-Action Models for Generalizable Robot Control Beyond VLAs
Robotics
Teaches robots to move by watching videos.