Weakly-supervised Latent Models for Task-specific Visual-Language Control
By: Xian Yeow Lee, Lasitha Vidyaratne, Gregory Sin, and more
Potential Business Impact:
Helps robots see and move objects precisely.
Autonomous inspection in hazardous environments requires AI agents that can interpret high-level goals and execute precise control. A key capability for such agents is spatial grounding, for example when a drone must center a detected object in its camera view to enable reliable inspection. While large language models provide a natural interface for specifying goals, using them directly for visual control achieves only 58% success on this task. We envision that equipping agents with a world model as a tool would allow them to roll out candidate actions and perform better in spatially grounded settings, but conventional world models are data- and compute-intensive. To address this, we propose a task-specific latent dynamics model that learns state-specific, action-induced shifts in a shared latent space using only goal-state supervision. The model leverages global action embeddings and complementary training losses to stabilize learning. In experiments, our approach achieves 71% success and generalizes to unseen images and instructions, highlighting the potential of compact, domain-specific latent dynamics models for spatial alignment in autonomous inspection.
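To make the core idea concrete, the following is a minimal sketch of a latent dynamics model of the kind the abstract describes: the predicted next latent is the current latent plus a state-specific shift conditioned on a globally shared action embedding, trained with only (state, action, goal-state) supervision. All dimensions, the linear dynamics head, and the plain gradient-descent loop are illustrative assumptions, not the paper's actual architecture or losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (assumptions, not from the paper):
# latent dim D, number of discrete actions A, action-embedding dim E
D, A, E = 8, 4, 4

# Global action embeddings shared across all states
action_emb = rng.normal(0, 0.1, (A, E))
# Linear dynamics head: maps [latent, action embedding] -> latent shift
W = rng.normal(0, 0.1, (D, D + E))

def predict_next(z, a):
    """Predicted next latent = current latent + action-induced shift."""
    x = np.concatenate([z, action_emb[a]])
    return z + W @ x

# Weak (goal-state) supervision: only the latent of the desired goal state
z0 = rng.normal(size=D)       # encoded current observation (assumed given)
z_goal = rng.normal(size=D)   # encoded goal observation
a = 2                         # action taken toward the goal

lr = 0.02
losses = []
for _ in range(300):
    x = np.concatenate([z0, action_emb[a]])
    err = (z0 + W @ x) - z_goal      # prediction error in latent space
    losses.append(float(err @ err))  # squared-error loss
    W -= lr * np.outer(err, x)       # gradient step on the dynamics head

print(losses[0] > losses[-1])  # loss decreases under goal-state supervision
```

In a full system, the encoder producing `z0`/`z_goal` and the stabilizing auxiliary losses mentioned in the abstract would be learned jointly; this sketch isolates only the action-conditioned shift in latent space.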
Similar Papers
Latent Action Pretraining Through World Modeling
Robotics
Teaches robots to do tasks from watching videos.
Language-Driven Hierarchical Task Structures as Explicit World Models for Multi-Agent Learning
Artificial Intelligence
Teaches robots to play soccer by explaining rules.
Latent-Space Autoregressive World Model for Efficient and Robust Image-Goal Navigation
Robotics
Makes robots navigate faster and smarter.