Vision-Language Models Unlock Task-Centric Latent Actions

Published: January 30, 2026 | arXiv ID: 2601.22714v1

By: Alexander Nikulin, Ilya Zisman, Albina Klepach, and more

Potential Business Impact:

Teaches robots to ignore distractions and learn better.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Latent Action Models (LAMs) have rapidly gained traction as an important component in the pre-training pipelines of leading Vision-Language-Action models. However, they fail when observations contain action-correlated distractors, often encoding noise instead of meaningful latent actions. Humans, on the other hand, can effortlessly distinguish task-relevant motions from irrelevant details in any video given only a brief task description. In this work, we propose to utilize the common-sense reasoning abilities of Vision-Language Models (VLMs) to provide promptable representations, effectively separating controllable changes from noise in an unsupervised way. We use these representations as targets during LAM training and benchmark a wide variety of popular VLMs, revealing substantial variation in the quality of promptable representations as well as their robustness to different prompts and hyperparameters. Interestingly, we find that more recent VLMs may perform worse than older ones. Finally, we show that simply asking VLMs to ignore distractors can substantially improve latent action quality, yielding up to a six-fold increase in downstream success rates on Distracting MetaWorld.
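The core idea of the abstract, using prompt-conditioned VLM representations as training targets for a latent action model, can be illustrated with a minimal sketch. This is not the authors' implementation: the PyTorch modules, the HypotheticalVLMEncoder stand-in, the distractor-ignoring prompt string, all dimensions, and the specific choice of regressing the change in the VLM representation are assumptions made for clarity.

# A minimal sketch (not the authors' code): train a latent action model (LAM)
# whose decoder predicts the change in a prompt-conditioned VLM representation
# between consecutive frames, rather than reconstructing raw pixels.
# HypotheticalVLMEncoder, the prompt string, and all dimensions are assumptions.
import torch
import torch.nn as nn

IMG_DIM = 3 * 64 * 64   # flattened 64x64 RGB frame (assumed)
TARGET_DIM = 256        # size of the VLM representation (assumed)
LATENT_DIM = 16         # size of the latent action (assumed)

class HypotheticalVLMEncoder(nn.Module):
    """Stand-in for a frozen VLM mapping (frame, prompt) -> embedding."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(IMG_DIM, TARGET_DIM)

    @torch.no_grad()
    def forward(self, frames, prompt):
        # A real VLM would condition on the prompt (e.g. "ignore distractors");
        # this placeholder ignores it and just projects the flattened frame.
        return self.proj(frames.flatten(1))

class LatentActionModel(nn.Module):
    """Inverse-dynamics-style LAM: (o_t, o_{t+1}) -> latent action -> target."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2 * IMG_DIM, 512), nn.ReLU(), nn.Linear(512, LATENT_DIM))
        # The decoder regresses the VLM representation change instead of pixels,
        # removing the incentive to encode distractor motion in the latent action.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.ReLU(), nn.Linear(512, TARGET_DIM))

    def forward(self, obs_t, obs_tp1):
        z = self.encoder(torch.cat([obs_t.flatten(1), obs_tp1.flatten(1)], dim=1))
        return z, self.decoder(z)

vlm, lam = HypotheticalVLMEncoder(), LatentActionModel()
opt = torch.optim.Adam(lam.parameters(), lr=3e-4)
prompt = "Describe only the robot and manipulated object; ignore background motion."

obs_t, obs_tp1 = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)  # dummy frames

# Target: change in the prompt-conditioned representation across the transition.
target = vlm(obs_tp1, prompt) - vlm(obs_t, prompt)
z, pred = lam(obs_t, obs_tp1)
loss = nn.functional.mse_loss(pred, target)
opt.zero_grad(); loss.backward(); opt.step()

In this sketch the prompt is where the paper's key finding would enter: swapping in an instruction that explicitly tells the VLM to ignore distractors changes the target representation, which the abstract reports can improve downstream success rates up to six-fold on Distracting MetaWorld.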

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)