Score: 1

MP-GUI: Modality Perception with MLLMs for GUI Understanding

Published: March 18, 2025 | arXiv ID: 2503.14021v1

By: Ziwei Wang, Weizhi Chen, Leyang Yang, and more

Potential Business Impact:

Helps computers understand app screens like people do.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Graphical user interfaces (GUIs) have become integral to modern society, making their understanding crucial for human-centric systems. However, unlike natural images or documents, GUIs comprise artificially designed graphical elements arranged to convey specific semantic meanings. Current multi-modal large language models (MLLMs), although already proficient in processing graphical and textual components, struggle with GUI understanding due to the lack of explicit spatial structure modeling. Moreover, obtaining high-quality spatial structure data is challenging due to privacy issues and noisy environments. To address these challenges, we present MP-GUI, an MLLM specially designed for GUI understanding. MP-GUI features three precisely specialized perceivers that extract graphical, textual, and spatial modalities from the screen as GUI-tailored visual clues; the spatial modality is refined by a spatial structure refinement strategy, and the three modalities are adaptively combined via a fusion gate to meet the specific preferences of different GUI understanding tasks. To cope with the scarcity of training data, we also introduce a pipeline for automatic data collection. Extensive experiments demonstrate that MP-GUI achieves impressive results on various GUI understanding tasks with limited data.
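
The fusion gate described above can be pictured as a learned soft weighting over the outputs of the three perceivers. The sketch below is a minimal, hypothetical illustration, not the paper's actual implementation: the module name `ModalityFusionGate`, the feature dimension, and the softmax-over-modalities gating form are all assumptions about how such a gate is commonly built.

```python
import torch
import torch.nn as nn

class ModalityFusionGate(nn.Module):
    """Hypothetical sketch of a fusion gate over three modality features.

    Assumes each perceiver emits a d-dimensional feature and the gate
    learns adaptive soft weights over the three modalities; the paper's
    exact design may differ.
    """

    def __init__(self, dim: int, num_modalities: int = 3):
        super().__init__()
        # Scores one weight per modality from the concatenated features.
        self.gate = nn.Linear(num_modalities * dim, num_modalities)

    def forward(self, graphical, textual, spatial):
        # Each input: (batch, dim) feature from one specialized perceiver.
        feats = torch.stack([graphical, textual, spatial], dim=1)  # (B, 3, d)
        weights = torch.softmax(
            self.gate(feats.flatten(start_dim=1)), dim=-1
        )  # (B, 3): adaptive preference over the three modalities
        # Weighted sum yields a single fused GUI representation.
        return (weights.unsqueeze(-1) * feats).sum(dim=1)  # (B, d)

# Usage: fuse 512-dim visual clues from the three perceivers.
gate = ModalityFusionGate(dim=512)
g, t, s = (torch.randn(2, 512) for _ in range(3))
fused = gate(g, t, s)  # shape: (2, 512)
```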

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Computer Vision and Pattern Recognition