Latent Action World Models for Control with Unlabeled Trajectories
By: Marvin Alles, Xingyuan Zhang, Patrick van der Smagt, and more
Potential Business Impact:
Teaches robots to learn from watching and doing.
Inspired by how humans combine direct interaction with action-free experience (e.g., videos), we study world models that learn from heterogeneous data. Standard world models rely on action-conditioned trajectories, which limits their effectiveness when action labels are scarce. We introduce a family of latent-action world models that jointly use action-conditioned and action-free data by learning a shared latent action representation. This latent space aligns observed control signals with actions inferred from passive observations, enabling a single dynamics model to train on large-scale unlabeled trajectories while requiring only a small set of action-labeled ones. We then use the latent-action world model to learn a latent-action policy through offline reinforcement learning (RL), bridging two traditionally separate domains: offline RL, which typically relies on action-conditioned data, and action-free training, which is rarely followed by downstream RL. On the DeepMind Control Suite, our approach achieves strong performance while using about an order of magnitude fewer action-labeled samples than purely action-conditioned baselines. These results show that latent actions enable training on both passive and interactive data, allowing world models to learn more efficiently.
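To make the mechanism concrete, here is a minimal sketch of the latent-action idea in PyTorch. It is an illustration under assumptions, not the authors' architecture: the module names (inverse, action_enc, dynamics), the MLP shapes, and the MSE alignment loss are all hypothetical stand-ins. The key point it demonstrates is that the dynamics model consumes only latent actions, so unlabeled transitions supervise it via an inverse model, while the few labeled transitions additionally tie real controls to the same latent space.

```python
# Minimal sketch of a latent-action world model (illustrative, not the
# paper's exact design). Dimensions and losses are assumptions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, LATENT_ACT_DIM, HID = 32, 6, 8, 256

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, HID), nn.ReLU(), nn.Linear(HID, out))

class LatentActionWorldModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Inverse model: infers a latent action from an observation
        # transition, so passive (action-free) data can supervise dynamics.
        self.inverse = mlp(2 * OBS_DIM, LATENT_ACT_DIM)
        # Action encoder: maps ground-truth controls into the same latent
        # action space (used only on the small action-labeled subset).
        self.action_enc = mlp(ACT_DIM, LATENT_ACT_DIM)
        # Forward dynamics: a single model over latent actions, shared
        # across labeled and unlabeled trajectories.
        self.dynamics = mlp(OBS_DIM + LATENT_ACT_DIM, OBS_DIM)

    def loss(self, obs, next_obs, action=None):
        # Latent action inferred purely from the observed transition.
        z_inv = self.inverse(torch.cat([obs, next_obs], dim=-1))
        # Dynamics loss works for both labeled and unlabeled batches.
        pred = self.dynamics(torch.cat([obs, z_inv], dim=-1))
        total = nn.functional.mse_loss(pred, next_obs)
        if action is not None:
            # Alignment loss: pull the encoded real action toward the
            # inferred latent action, tying the two sources together.
            z_act = self.action_enc(action)
            total = total + nn.functional.mse_loss(z_act, z_inv.detach())
        return total

if __name__ == "__main__":
    model = LatentActionWorldModel()
    obs, nxt = torch.randn(16, OBS_DIM), torch.randn(16, OBS_DIM)
    act = torch.randn(16, ACT_DIM)
    # Mixed training: many unlabeled transitions, few labeled ones.
    loss = model.loss(obs, nxt) + model.loss(obs, nxt, action=act)
    loss.backward()
```

In the paper's pipeline, a policy is then trained over the latent action space with offline RL; a decoder from latent actions back to real controls (omitted above) would be needed to execute that policy on the environment.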
Similar Papers
Co-Evolving Latent Action World Models
Machine Learning (CS)
Makes AI learn and control worlds better.
AdaWorld: Learning Adaptable World Models with Latent Actions
Artificial Intelligence
Teaches robots to learn new actions quickly.
Latent Action Pretraining Through World Modeling
Robotics
Teaches robots to do tasks from watching videos.