MP-GUI: Modality Perception with MLLMs for GUI Understanding
By: Ziwei Wang, Weizhi Chen, Leyang Yang, and more
Potential Business Impact:
Helps computers understand app screens like people do.
Graphical user interfaces (GUIs) have become integral to modern society, making their understanding crucial for human-centric systems. However, unlike natural images or documents, GUIs comprise artificially designed graphical elements arranged to convey specific semantic meanings. Current multi-modal large language models (MLLMs), already proficient in processing graphical and textual components, struggle with GUI understanding because they lack explicit spatial structure modeling. Moreover, obtaining high-quality spatial structure data is challenging due to privacy issues and noisy environments. To address these challenges, we present MP-GUI, an MLLM specially designed for GUI understanding. MP-GUI features three precisely specialized perceivers that extract graphical, textual, and spatial modalities from the screen as GUI-tailored visual clues; the spatial clues are improved with a spatial structure refinement strategy, and all clues are adaptively combined via a fusion gate to meet the specific preferences of different GUI understanding tasks. To cope with the scarcity of training data, we also introduce a pipeline for automatic data collection. Extensive experiments demonstrate that MP-GUI achieves impressive results on various GUI understanding tasks with limited data.
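To make the fusion-gate idea concrete, here is a minimal sketch of how gated combination of the three perceivers' outputs could look. The abstract does not specify the architecture, so the class name, feature dimensions, and softmax gating below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Illustrative sketch (not the paper's code): adaptively weights
    graphical, textual, and spatial visual clues before they are fed
    to the language model."""
    def __init__(self, dim: int):
        super().__init__()
        # One gating score per modality, conditioned on all three clues.
        self.gate = nn.Sequential(nn.Linear(3 * dim, 3), nn.Softmax(dim=-1))

    def forward(self, graphical, textual, spatial):
        # Each input: (batch, dim) features from one specialized perceiver.
        weights = self.gate(torch.cat([graphical, textual, spatial], dim=-1))  # (batch, 3)
        stacked = torch.stack([graphical, textual, spatial], dim=-1)           # (batch, dim, 3)
        return (stacked * weights.unsqueeze(1)).sum(dim=-1)                    # (batch, dim)

# Example: fuse three hypothetical 1024-d clue vectors for two screenshots.
g, t, s = (torch.randn(2, 1024) for _ in range(3))
fused = FusionGate(1024)(g, t, s)
print(fused.shape)  # torch.Size([2, 1024])
```

A softmax gate like this lets different GUI tasks (e.g. text-heavy vs. layout-heavy queries) lean on different modalities; the actual mechanism in MP-GUI may differ.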
Similar Papers
A Survey on (M)LLM-Based GUI Agents
Human-Computer Interaction
Computers learn to do tasks on screens by themselves.
Structuring GUI Elements through Vision Language Models: Towards Action Space Generation
CV and Pattern Recognition
Helps computers understand on-screen buttons and menus.