AndroidLens: Long-latency Evaluation with Nested Sub-targets for Android GUI Agents
By: Yue Cao, Yingyao Wang, Pi Bu, and more
Potential Business Impact:
Benchmarks AI agents that automate long, complex tasks on phones, guiding development of smarter mobile assistants.
Graphical user interface (GUI) agents can substantially improve productivity by automating frequently executed long-latency tasks on mobile devices. However, existing evaluation benchmarks are still constrained to limited applications, simple tasks, and coarse-grained metrics. To address this, we introduce AndroidLens, a challenging evaluation framework for mobile GUI agents, comprising 571 long-latency tasks in both Chinese and English environments, each requiring an average of more than 26 steps to complete. The framework features: (1) tasks derived from real-world user scenarios across 38 domains, covering complex types such as multi-constraint, multi-goal, and domain-specific tasks; (2) static evaluation that preserves real-world anomalies and allows multiple valid paths to reduce bias; and (3) dynamic evaluation that employs a milestone-based scheme for fine-grained progress measurement via Average Task Progress (ATP). Our evaluation indicates that even the best models reach only a 12.7% task success rate and 50.47% ATP. We also underscore key challenges in real-world environments, including environmental anomalies, adaptive exploration, and long-term memory retention.
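The milestone-based Average Task Progress (ATP) metric described above can be sketched as follows: each task defines a set of milestones, a run's progress is the fraction of milestones the agent reaches, and ATP averages that fraction over all tasks. This is a minimal illustration under that reading of the abstract; the function and variable names are hypothetical, not from the paper.

```python
# Hypothetical sketch of a milestone-based Average Task Progress (ATP)
# metric: per-task progress is the fraction of milestones reached, and
# ATP is the mean of that fraction across all evaluated tasks.

def task_progress(milestones_reached: int, total_milestones: int) -> float:
    """Fraction of a task's milestones the agent completed (0.0 to 1.0)."""
    if total_milestones == 0:
        raise ValueError("a task must define at least one milestone")
    return milestones_reached / total_milestones

def average_task_progress(runs: list[tuple[int, int]]) -> float:
    """Mean task progress over all tasks.

    `runs` holds one (milestones_reached, total_milestones) pair per task.
    """
    return sum(task_progress(r, t) for r, t in runs) / len(runs)

# Example: three tasks, partially completed to different degrees.
runs = [(2, 4), (5, 5), (0, 2)]
print(f"ATP = {average_task_progress(runs):.2%}")  # ATP = 50.00%
```

Unlike a binary success rate, this style of metric rewards partial completion, which is why ATP (50.47% for the best model) can be far higher than task success rate (12.7%) on the same runs.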
Similar Papers
ColorBench: Benchmarking Mobile Agents with Graph-Structured Framework for Complex Long-Horizon Tasks
Artificial Intelligence
Benchmarks mobile AI agents on complex, long-horizon phone tasks and pinpoints where they fail.
Modular and Multi-Path-Aware Offline Benchmarking for Mobile GUI Agents
Artificial Intelligence
Benchmarks phone-controlling AI agents offline, accounting for multiple valid solution paths.
Step-GUI Technical Report
CV and Pattern Recognition
Teaches computers to control apps by watching you.