Learning from Online Videos at Inference Time for Computer-Use Agents
By: Yujian Liu, Ze Wang, Hao Chen, and more
Potential Business Impact:
Teaches computers to learn tasks by watching videos.
Computer-use agents can operate computers and automate laborious tasks, but despite recent rapid progress, they still lag behind human users, especially when tasks require domain-specific procedural knowledge about particular applications, platforms, and multi-step workflows. Humans can bridge this gap by watching video tutorials: we search, skim, and selectively imitate short segments that match our current subgoal. In this paper, we study how to enable computer-use agents to learn effectively from online videos at inference time. We propose a framework that retrieves and filters tutorial videos, converts them into structured demonstration trajectories, and dynamically selects trajectories as in-context guidance during execution. Specifically, using a VLM, we infer UI actions, segment videos into short subsequences of actions, and assign each subsequence a textual objective. At inference time, a two-stage selection mechanism dynamically chooses a single trajectory to add in context at each step, focusing the agent on the most helpful local guidance for its next decision. Experiments on two widely used benchmarks show that our framework consistently outperforms strong base agents and variants that use only textual tutorials or transcripts. Analyses highlight the importance of trajectory segmentation and selection, action filtering, and visual information, suggesting that abundant online videos can be systematically distilled into actionable guidance that improves computer-use agents at inference time. Our code is available at https://github.com/UCSB-NLP-Chang/video_demo.
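The abstract describes a two-part pipeline: an offline stage that distills tutorial videos into short, labeled demonstration trajectories, and an inference stage that selects one trajectory per step as in-context guidance. Below is a minimal Python sketch of that flow under assumed interfaces; every helper name (retrieve_videos, vlm_infer_actions, top_k_by_similarity, etc.) is hypothetical and does not come from the paper's released code.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    objective: str       # textual subgoal assigned to the segment by the VLM
    actions: list[str]   # short subsequence of inferred UI actions

def build_trajectories(task: str) -> list[Trajectory]:
    """Offline stage (hypothetical helpers): retrieve and filter tutorial
    videos, then convert them into structured demonstration trajectories."""
    videos = [v for v in retrieve_videos(task) if is_relevant(v, task)]
    trajectories = []
    for video in videos:
        actions = vlm_infer_actions(video)          # VLM infers UI actions from frames
        for segment in segment_actions(actions):    # split into short subsequences
            objective = vlm_describe(segment)       # assign a textual objective
            trajectories.append(Trajectory(objective, segment))
    return trajectories

def run_agent(task: str, max_steps: int = 50):
    """Inference stage: a two-stage selection picks a single trajectory to
    place in context at each step as local guidance."""
    trajectories = build_trajectories(task)
    state = get_initial_observation()
    for _ in range(max_steps):
        # Stage 1: coarse filter by matching the current state/subgoal
        # against each trajectory's textual objective.
        candidates = top_k_by_similarity(state, trajectories, k=5)
        # Stage 2: choose the single most helpful trajectory for this step.
        guidance = select_best(state, candidates)
        action = agent_decide(state, in_context=guidance)
        state = execute(action)
        if task_done(state):
            break
```

This sketch only illustrates the control flow implied by the abstract; the actual retrieval, filtering, segmentation, and selection criteria are specified in the paper and its repository.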
Similar Papers
Watch and Learn: Learning to Use Computers from Online Videos
Artificial Intelligence
Teaches computers to do tasks from watching videos.
VideoAgentTrek: Computer Use Pretraining from Unlabeled Videos
Computation and Language
Teaches computers to use apps from online videos.
SneakPeek: Future-Guided Instructional Streaming Video Generation
Computer Vision and Pattern Recognition
Makes videos show how to do things step-by-step.