Zoom in, Click out: Unlocking and Evaluating the Potential of Zooming for GUI Grounding
By: Zhiyuan Jiang, Shenghao Xie, Wenyi Li, and more
Potential Business Impact:
Helps software agents find and click the right elements on a screen.
Grounding is a fundamental capability for building graphical user interface (GUI) agents. Although existing approaches rely on large-scale bounding box supervision, they still face various challenges, such as cross-platform generalization, complex layout analysis, and fine-grained element localization. In this paper, we investigate zoom as a strong yet underexplored prior for GUI grounding, and propose a training-free method, ZoomClick. By characterizing four key properties of zoom (i.e., pre-zoom, depth, shrink size, minimal crop size), we unlock its full capabilities for dynamic spatial focusing and adaptive context switching. Experiments demonstrate that our method significantly boosts the performance of both general vision-language and specialized GUI grounding models, achieving state-of-the-art results on several mainstream benchmarks; for example, UI-Venus-72B attains a 73.1% success rate on ScreenSpot-Pro. Furthermore, we present GUIZoom-Bench, a benchmark for evaluating model adaptability to zoom, aiming to inspire future research on improving zoom for further training and test-time scaling in GUI grounding tasks.
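The abstract does not include pseudocode, so below is a minimal, hypothetical Python sketch of the kind of training-free zoom-and-click loop it describes, with the four zoom properties (pre-zoom, depth, shrink size, minimal crop size) exposed as parameters. The `ground` callable standing in for the underlying vision-language model is an assumption, as are the parameter names and default values; the paper's tuned settings may differ.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

from PIL import Image

# A grounding backend maps (screenshot, instruction) -> (x, y) in that
# image's pixel coordinates. This is a stand-in for whatever VLM is used.
Grounder = Callable[[Image.Image, str], Tuple[float, float]]


@dataclass
class ZoomConfig:
    # Hypothetical defaults; not the paper's tuned values.
    pre_zoom: float = 1.0   # upscale factor applied before the first query
    depth: int = 2          # number of zoom-in refinement rounds
    shrink: float = 0.5     # each crop keeps this fraction of the current view
    min_crop: int = 256     # stop zooming once a crop edge would be this small (px)


def zoom_click(image: Image.Image, instruction: str,
               ground: Grounder, cfg: ZoomConfig = ZoomConfig()) -> Tuple[int, int]:
    """Iteratively zoom toward the model's prediction and return a click
    point in the original screenshot's coordinates."""
    # Pre-zoom: enlarge the full screenshot so small widgets are legible.
    view = image.resize((int(image.width * cfg.pre_zoom),
                         int(image.height * cfg.pre_zoom)))
    # (ox, oy) is the current view's top-left in original coords; s is its scale.
    ox, oy, s = 0.0, 0.0, cfg.pre_zoom

    x, y = ground(view, instruction)
    for _ in range(cfg.depth):
        # The next crop is a shrink-sized window centered on the prediction.
        w, h = view.width * cfg.shrink, view.height * cfg.shrink
        if min(w, h) < cfg.min_crop:
            break  # respect the minimal crop size; zooming further loses context
        left = min(max(x - w / 2, 0), view.width - w)
        top = min(max(y - h / 2, 0), view.height - h)
        crop = view.crop((int(left), int(top), int(left + w), int(top + h)))

        # Track the crop's position in original-image coordinates, then re-query.
        ox, oy = ox + left / s, oy + top / s
        view = crop
        x, y = ground(view, instruction)

    # Map the final prediction back to the original screenshot.
    return int(ox + x / s), int(oy + y / s)
```

In this sketch, pre-zoom trades resolution budget for legibility of small elements, while the minimal crop size keeps enough surrounding context for the model to disambiguate visually similar widgets; both reflect the "dynamic spatial focusing and adaptive context switching" the abstract attributes to zoom.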
Similar Papers
HyperClick: Advancing Reliable GUI Grounding via Uncertainty Calibration
CV and Pattern Recognition
Helps models recognize when they cannot complete a task.
WinClick: GUI Grounding with Multimodal Large Language Models
Computation and Language
Lets computers control apps from screenshots alone.
MEGA-GUI: Multi-stage Enhanced Grounding Agents for GUI Elements
Artificial Intelligence
Helps computers understand screen instructions better.