UI-Vision: A Desktop-centric GUI Benchmark for Visual Perception and Interaction

Published: March 19, 2025 | arXiv ID: 2503.15661v2

By: Shravan Nayak, Xiangru Jian, Kevin Qinghong Lin, and more

Potential Business Impact:

Helps computer agents learn to operate software applications the way people do.

Business Areas:
Image Recognition, Data and Analytics, Software

Autonomous agents that navigate Graphical User Interfaces (GUIs) to automate tasks like document editing and file management can greatly enhance computer workflows. While existing research focuses on online settings, desktop environments, critical for many professional and everyday tasks, remain underexplored due to data collection challenges and licensing issues. We introduce UI-Vision, the first comprehensive, license-permissive benchmark for offline, fine-grained evaluation of computer use agents in real-world desktop environments. Unlike online benchmarks, UI-Vision provides: (i) dense, high-quality annotations of human demonstrations, including bounding boxes, UI labels, and action trajectories (clicks, drags, and keyboard inputs) across 83 software applications, and (ii) three fine-to-coarse-grained tasks (Element Grounding, Layout Grounding, and Action Prediction) with well-defined metrics to rigorously evaluate agents' performance in desktop environments. Our evaluation reveals critical limitations in state-of-the-art models like UI-TARS-72B, including issues with understanding professional software, spatial reasoning, and complex actions like drag-and-drop. These findings highlight the challenges in developing fully autonomous computer use agents. By releasing UI-Vision as open-source, we aim to advance the development of more capable agents for real-world desktop tasks.
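To illustrate the kind of metric an element-grounding task implies (this is a hedged sketch, not the paper's actual evaluation code), grounding is often scored by checking whether a model's predicted click point lands inside the ground-truth bounding box of the target UI element. All names and data below are hypothetical:

```python
def grounding_accuracy(predictions, bboxes):
    """Fraction of predicted click points (x, y) that fall inside their
    matching ground-truth bounding box (left, top, right, bottom).
    Assumes predictions[i] corresponds to bboxes[i]."""
    hits = sum(
        left <= x <= right and top <= y <= bottom
        for (x, y), (left, top, right, bottom) in zip(predictions, bboxes)
    )
    return hits / len(bboxes)

# Hypothetical example: the first prediction lands inside its box,
# the second misses entirely.
preds = [(120, 45), (300, 200)]
boxes = [(100, 30, 150, 60), (10, 10, 50, 50)]
print(grounding_accuracy(preds, boxes))  # → 0.5
```

Layout grounding and action prediction would need richer scoring (e.g. matching whole regions or comparing action sequences), but a point-in-box check captures the core idea of the fine-grained level.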

Country of Origin
🇨🇦 Canada

Repos / Data Links

Page Count
35 pages

Category
Computer Science:
Computer Vision and Pattern Recognition