OS-Marathon: Benchmarking Computer-Use Agents on Long-Horizon Repetitive Tasks
By: Jing Wu, Daphne Barretto, Yiye Chen, and more
Potential Business Impact:
Teaches computers to do long, boring jobs faster.
Long-horizon, repetitive workflows are common in professional settings, such as processing expense reports from receipts or entering student grades from exam papers. These tasks are often tedious for humans because their length grows with the amount of data to process. They are, however, well suited to Computer-Use Agents (CUAs), since their structured, recurring sub-workflows follow logic that can be systematically learned. Identifying the absence of an evaluation benchmark as a primary bottleneck, we establish OS-Marathon, comprising 242 long-horizon, repetitive tasks across 2 domains to evaluate state-of-the-art (SOTA) agents. We then introduce a cost-effective method that constructs a condensed demonstration from only a few examples to teach agents the underlying workflow logic, enabling them to execute similar workflows effectively on larger, unseen data collections. Extensive experiments demonstrate both the inherent challenges of these tasks and the effectiveness of our proposed method. Project website: https://os-marathon.github.io/.
Similar Papers
OS-MAP: How Far Can Computer-Using Agents Go in Breadth and Depth?
Artificial Intelligence
Tests how broadly and deeply computer agents can handle daily tasks.
OSWorld-Human: Benchmarking the Efficiency of Computer-Use Agents
Artificial Intelligence
Makes AI assistants complete computer tasks faster.
OS-Symphony: A Holistic Framework for Robust and Generalist Computer-Using Agent
Multiagent Systems
Helps robots learn and fix mistakes better.