Learning Skills from Action-Free Videos
By: Hung-Chieh Fang, Kuo-Han Hung, Chu-Rong Chen, and more
Learning from videos offers a promising path toward generalist robots by providing rich visual and temporal priors beyond what real robot datasets contain. While existing video generative models produce impressive visual predictions, those predictions are difficult to translate into low-level actions. Conversely, latent-action models align videos with actions more directly, but they typically operate at the single-step level and lack high-level planning capabilities. We bridge this gap by introducing Skill Abstraction from Optical Flow (SOF), a framework that learns latent skills from large collections of action-free videos. Our key idea is to learn a latent skill space through an intermediate representation based on optical flow, which captures motion information aligned with both video dynamics and robot actions. By learning skills in this flow-based latent space, SOF enables high-level planning over video-derived skills and makes these skills easier to translate into actions. Experiments show that our approach consistently improves performance in both multitask and long-horizon settings, demonstrating the ability to acquire and compose skills directly from raw visual data.
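To make the two-stage idea concrete, below is a minimal, illustrative sketch of the pipeline the abstract describes: an encoder that abstracts an optical-flow clip into a latent skill, and a decoder that translates that skill (together with the robot's state) into a low-level action. All module names, dimensions, and architectural choices here are assumptions made for illustration; they are not the authors' actual SOF implementation.

```python
# Hypothetical sketch of a flow-based skill pipeline (not the SOF codebase):
# (1) encode an optical-flow clip into a latent skill vector,
# (2) decode that skill plus proprioceptive state into a robot action.

import torch
import torch.nn as nn


class FlowSkillEncoder(nn.Module):
    """Maps an optical-flow clip of shape (B, T, 2, H, W) to a latent skill."""

    def __init__(self, skill_dim: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.temporal = nn.GRU(input_size=64, hidden_size=skill_dim, batch_first=True)

    def forward(self, flow: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = flow.shape
        feats = self.conv(flow.reshape(b * t, c, h, w)).reshape(b, t, 64)
        _, last = self.temporal(feats)   # last hidden state: (1, B, skill_dim)
        return last.squeeze(0)           # latent skill z


class SkillToActionDecoder(nn.Module):
    """Translates a latent skill and robot state into a low-level action."""

    def __init__(self, skill_dim: int = 32, state_dim: int = 9, action_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(skill_dim + state_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, skill: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([skill, state], dim=-1))


if __name__ == "__main__":
    # Toy forward pass with random tensors standing in for real flow and state data.
    flow_clip = torch.randn(4, 8, 2, 64, 64)   # batch of 8-step flow clips
    robot_state = torch.randn(4, 9)
    skill = FlowSkillEncoder()(flow_clip)
    action = SkillToActionDecoder()(skill, robot_state)
    print(skill.shape, action.shape)           # (4, 32) (4, 7)
```

The design choice this sketch tries to reflect is the one motivated in the abstract: optical flow is computable from action-free video yet correlates with robot motion, so a skill space built on flow can be learned from video alone and later grounded in actions with comparatively little robot data.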