VideoAgentTrek: Computer Use Pretraining from Unlabeled Videos
By: Dunjie Lu, Yiheng Xu, Junli Wang, and more
Potential Business Impact:
Teaches computers to use apps from online videos.
Training computer-use agents requires massive amounts of GUI interaction data, but manually annotating action trajectories at scale is prohibitively expensive. We present VideoAgentTrek, a scalable pipeline that automatically mines training data from publicly available screen-recorded videos at web scale, eliminating the need for manual annotation. Our approach addresses a key challenge: raw videos contain implicit demonstrations but lack explicit action labels. To solve this, we develop Video2Action, an inverse dynamics module (IDM) with two components: (1) a video grounding model that detects and localizes GUI actions with precise temporal boundaries and context, and (2) an action-content recognizer that extracts structured parameters like click coordinates and typed text with high fidelity. Applied to 39,000 YouTube tutorial videos, our pipeline generates 1.52 million interaction steps automatically. We leverage this data through continued pretraining followed by supervised fine-tuning. On OSWorld-Verified, our approach improves task success rates from 9.3% (SFT-only baseline) to 15.8%, a 70% relative improvement. On AgentNetBench, step accuracy increases from 64.1% to 69.3%. Our results demonstrate that passive internet videos can be transformed into high-quality supervision for computer-use agents, providing a scalable alternative to expensive manual annotation.
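The abstract describes Video2Action as a two-stage inverse dynamics module: a grounding model first localizes GUI actions with temporal boundaries, and an action-content recognizer then recovers structured parameters such as click coordinates and typed text. The sketch below illustrates that two-stage flow in Python; all class, field, and function names (ActionSegment, InteractionStep, ground_actions, recognize_content) are hypothetical assumptions for illustration, not the authors' actual API or models.

```python
# Hypothetical sketch of the Video2Action two-stage IDM described in the abstract.
# Names and dummy outputs are illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActionSegment:
    """A GUI action detected and temporally localized by the grounding model."""
    start_s: float          # temporal boundary: segment start (seconds)
    end_s: float            # temporal boundary: segment end (seconds)
    action_type: str        # e.g. "click", "type", "scroll"

@dataclass
class InteractionStep:
    """A fully parameterized interaction step produced by the content recognizer."""
    segment: ActionSegment
    coordinates: Optional[tuple[int, int]] = None  # click position in screen pixels
    text: Optional[str] = None                     # typed text, if any

def ground_actions(video_path: str) -> list[ActionSegment]:
    # Stage 1 (placeholder): the video grounding model would detect actions here.
    return [ActionSegment(start_s=12.4, end_s=13.1, action_type="click"),
            ActionSegment(start_s=15.0, end_s=18.6, action_type="type")]

def recognize_content(video_path: str, seg: ActionSegment) -> InteractionStep:
    # Stage 2 (placeholder): the action-content recognizer would extract parameters here.
    if seg.action_type == "click":
        return InteractionStep(segment=seg, coordinates=(640, 360))
    return InteractionStep(segment=seg, text="example query")

def video2action(video_path: str) -> list[InteractionStep]:
    """Ground actions in time, then recover their structured parameters."""
    return [recognize_content(video_path, seg) for seg in ground_actions(video_path)]

if __name__ == "__main__":
    for step in video2action("tutorial.mp4"):
        print(step)
```

In the paper's pipeline, steps of this form mined from roughly 39,000 tutorial videos (1.52 million steps) feed continued pretraining before supervised fine-tuning.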
Similar Papers
Learning from Online Videos at Inference Time for Computer-Use Agents
Computer Vision and Pattern Recognition
Teaches computers to learn tasks by watching videos.
Watch and Learn: Learning to Use Computers from Online Videos
Artificial Intelligence
Teaches computers to do tasks from watching videos.
VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation
Robotics
Teaches robots to do tasks from watching videos.