OSWorld-Human: Benchmarking the Efficiency of Computer-Use Agents
By: Reyna Abhyankar, Qi Qi, Yiying Zhang
Potential Business Impact:
Shows why AI assistants are slow at computer tasks, guiding work to make them faster.
Generative AI is being leveraged to solve a variety of computer-use tasks involving desktop applications. State-of-the-art systems have focused solely on improving accuracy on leading benchmarks. However, these systems are practically unusable due to extremely high end-to-end latency (e.g., tens of minutes) for tasks that typically take humans just a few minutes to complete. To understand the causes behind this and to guide the future development of computer-use agents, we conduct the first study of the temporal performance of computer-use agents on OSWorld, the flagship benchmark in computer-use AI. We find that large model calls for planning and reflection account for the majority of the overall latency, and that as an agent takes more steps to complete a task, each successive step can take 3x longer than steps at the beginning of the task. We then construct OSWorld-Human, a manually annotated version of the original OSWorld dataset that contains a human-determined trajectory for each task. Evaluating 16 agents on their efficiency using OSWorld-Human, we find that even the highest-scoring agents on OSWorld take 1.4-2.7x more steps than necessary.
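The efficiency comparison in the abstract can be made concrete as a step-overhead ratio: the number of steps an agent takes divided by the length of the human-annotated trajectory for the same task. The sketch below is illustrative only, under the assumption that such a ratio is how the gap is quantified; the task records and names (`agent_steps`, `human_steps`) are hypothetical and not drawn from the paper.

```python
def step_overhead(agent_steps: int, human_steps: int) -> float:
    """Ratio of steps an agent took to the human-annotated trajectory length.

    A value of 1.0 means the agent matched the human trajectory; the
    abstract reports 1.4-2.7x for the highest-scoring OSWorld agents.
    """
    if human_steps <= 0:
        raise ValueError("human trajectory must contain at least one step")
    return agent_steps / human_steps


# Hypothetical task records, for illustration only.
tasks = [
    {"task": "rename-file", "agent_steps": 14, "human_steps": 6},
    {"task": "edit-spreadsheet", "agent_steps": 22, "human_steps": 11},
]

# Average the per-task overhead across the benchmark.
mean_overhead = sum(
    step_overhead(t["agent_steps"], t["human_steps"]) for t in tasks
) / len(tasks)
print(f"mean step overhead: {mean_overhead:.2f}x")
```

Because each step typically involves at least one large model call, and later steps grow more expensive, reducing this ratio toward 1.0 would cut end-to-end latency by more than the step count alone suggests.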
Similar Papers
OS-MAP: How Far Can Computer-Using Agents Go in Breadth and Depth?
Artificial Intelligence
Tests how well computers can do daily tasks.
OS-Marathon: Benchmarking Computer-Use Agents on Long-Horizon Repetitive Tasks
CV and Pattern Recognition
Teaches computers to do long, boring jobs faster.
Continuous Benchmark Generation for Evaluating Enterprise-scale LLM Agents
Software Engineering
Creates better tests for smart computer helpers.