"Are We Done Yet?": A Vision-Based Judge for Autonomous Task Completion of Computer Use Agents
By: Marta Sumyk, Oleksandr Kosovan
Potential Business Impact:
Helps computers know when they have finished tasks.
Computer Use Agents (CUAs) are designed to autonomously operate digital interfaces, yet they often fail to reliably determine whether a given task has been completed. We present an autonomous evaluation and feedback framework that uses vision-language models to assess task completion directly from screenshots and task descriptions. Our dataset covers 42 built-in macOS applications and 1,260 human-labeled tasks across a wide range of scenarios. Our framework achieves up to 73 percent accuracy in task success detection and yields an average relative improvement of 27 percent in overall task success when evaluator feedback is applied. These results show that vision-based evaluation can serve as an effective feedback mechanism that improves the reliability and self-correction of autonomous computer-use agents.
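To make the framework concrete, here is a minimal sketch of a vision-based completion judge with an evaluator feedback loop, assuming an OpenAI-compatible vision-language API and a hypothetical `agent.execute` interface; the paper does not specify the model, prompt, or agent API, so all of those details are illustrative assumptions.

```python
# Sketch of a vision-based task-completion judge with a feedback loop.
# Assumptions (not from the paper): an OpenAI-compatible VLM endpoint,
# the "gpt-4o" model name, the prompt wording, and the agent interface.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def judge_completion(screenshot_path: str, task: str) -> bool:
    """Ask a vision-language model whether the screenshot shows the task done."""
    with open(screenshot_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice of judge model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Task: {task}\n"
                         "Does this screenshot show the task completed? "
                         "Answer YES or NO, then briefly explain."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    verdict = response.choices[0].message.content or ""
    return verdict.strip().upper().startswith("YES")


def run_with_feedback(agent, task: str, max_attempts: int = 3) -> bool:
    """Retry the agent with evaluator feedback until the judge reports success."""
    feedback = None
    for _ in range(max_attempts):
        # agent.execute is a hypothetical interface: it performs the task
        # (optionally conditioned on feedback) and returns a screenshot path.
        screenshot = agent.execute(task, feedback=feedback)
        if judge_completion(screenshot, task):
            return True
        feedback = "The evaluator judged the task incomplete; please retry."
    return False
```

The key design point from the abstract is that the judge needs only a screenshot and the task description, so it can wrap any agent as a post-hoc evaluator and close the loop by feeding its verdict back as a retry signal.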
Similar Papers
Computer-Use Agents as Judges for Generative User Interface
CV and Pattern Recognition
Computers design better websites for other computers.
OpenCUA: Open Foundations for Computer-Use Agents
Artificial Intelligence
Lets computers learn to do tasks on your screen.