Score: 1

How Auxiliary Reasoning Unleashes GUI Grounding in VLMs

Published: September 15, 2025 | arXiv ID: 2509.11548v1

By: Weiming Li, Yan Shao, Jing Yang, and more

Potential Business Impact:

Helps software agents locate and interact with elements on a screen, making GUI automation more reliable.

Business Areas:
Visual Search, Internet Services

Graphical user interface (GUI) grounding is a fundamental task for building GUI agents. However, general vision-language models (VLMs) struggle with this task due to a lack of task-specific optimization. In this paper, we identify a key gap: while VLMs exhibit significant latent grounding potential, as demonstrated by their performance under the Pointing Game metric, they underperform when asked to output explicit coordinates. To address this discrepancy and bypass the high data and annotation costs of current fine-tuning approaches, we propose three zero-shot auxiliary reasoning methods. By providing explicit spatial cues such as axes, grids, and labeled intersections as part of the input image, these methods enable VLMs to articulate their implicit spatial understanding. We evaluate these methods on four GUI grounding benchmarks across seven open-source and proprietary VLMs. The results demonstrate that the proposed methods substantially improve GUI grounding performance.
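To make the idea concrete, here is a minimal sketch (not the authors' released code) of one such auxiliary cue: drawing a coordinate-labeled grid onto a screenshot before prompting a VLM, plus a Pointing Game hit check. The cell size, colors, and function names are assumptions chosen for illustration.

```python
# Sketch only: overlay a labeled grid (one of the paper's auxiliary spatial
# cues) on a GUI screenshot, and score a prediction with the Pointing Game.
from PIL import Image, ImageDraw


def add_labeled_grid(image_path: str, out_path: str, step: int = 100) -> None:
    """Draw grid lines every `step` pixels and label each intersection with
    its (x, y) pixel coordinates so the VLM can reference them explicitly."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size

    # Vertical and horizontal grid lines.
    for x in range(0, w, step):
        draw.line([(x, 0), (x, h)], fill=(255, 0, 0), width=1)
    for y in range(0, h, step):
        draw.line([(0, y), (w, y)], fill=(255, 0, 0), width=1)

    # Label every intersection with its pixel coordinates.
    for x in range(0, w, step):
        for y in range(0, h, step):
            draw.text((x + 2, y + 2), f"({x},{y})", fill=(255, 0, 0))

    img.save(out_path)


def pointing_game_hit(point: tuple[int, int], box: tuple[int, int, int, int]) -> bool:
    """Pointing Game: a prediction counts as a hit if the predicted point
    falls inside the target element's bounding box (x1, y1, x2, y2)."""
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2
```

The annotated image would then be passed to the VLM together with the grounding instruction, letting the model read off coordinates from the printed labels instead of estimating them from raw pixels.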

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
Computer Vision and Pattern Recognition