R-VLM: Region-Aware Vision Language Model for Precise GUI Grounding
By: Joonhyung Park, Peng Tang, Sagnik Das, and more
Potential Business Impact:
Helps computers understand and click on screen buttons.
Visual agent models for automating human activities on Graphical User Interfaces (GUIs) have emerged as a promising research direction, driven by advances in large Vision Language Models (VLMs). A critical challenge in GUI automation is the precise grounding of interface elements across diverse platforms. Existing vision-only GUI agents ground elements directly from large, cluttered screenshots, forcing them to process substantial irrelevant information that compromises their accuracy. Moreover, these approaches typically employ a basic cross-entropy loss for the grounding objective, which, unlike established object detection metrics such as Intersection-over-Union (IoU), does not directly capture grounding quality. To address these issues, we introduce R-VLM, a novel GUI grounding approach that leverages zoomed-in region proposals for precise element localization. We also propose an IoU-aware objective function that drives model convergence toward high-IoU predictions. Our approach bridges the gap between VLMs and conventional object detection techniques, improving state-of-the-art grounding accuracy by 13% across diverse GUI platforms on the ScreenSpot and AgentStudio grounding benchmarks. Furthermore, R-VLM yields 3.2-9.7% absolute accuracy improvements on the AITW and Mind2Web GUI navigation benchmarks.
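For intuition on the IoU-aware objective the abstract mentions, below is a minimal sketch of an IoU-based loss term over axis-aligned boxes. The function names and the simple 1 − IoU formulation are illustrative assumptions for exposition; the paper's actual objective and box parameterization may differ.

```python
# Minimal sketch of an IoU-based loss term. Boxes are axis-aligned
# (x1, y1, x2, y2) tuples; this illustrates the general idea of rewarding
# high-IoU predictions, not the paper's exact formulation.

def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred_box, gt_box):
    """1 - IoU: approaches 0 as the predicted box converges on the target."""
    return 1.0 - iou(pred_box, gt_box)

# Example: a prediction overlapping half of the target element.
print(iou_loss((0, 0, 10, 10), (5, 0, 15, 10)))  # ~0.667 (IoU = 1/3)
```

Unlike a per-token cross-entropy over coordinate strings, such a term scores the predicted box as a whole, so two predictions that differ equally in token space but overlap the target very differently receive appropriately different losses.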
Similar Papers
GUI-Actor: Coordinate-Free Visual Grounding for GUI Agents
Computation and Language
Helps computers understand where to click on screens.
How Auxiliary Reasoning Unleashes GUI Grounding in VLMs
CV and Pattern Recognition
Helps computers understand where things are on screens.
VLM-R³: Region Recognition, Reasoning, and Refinement for Enhanced Multimodal Chain-of-Thought
CV and Pattern Recognition
Helps computers understand pictures to answer questions.