TongUI: Building Generalized GUI Agents by Learning from Multimodal Web Tutorials
By: Bofei Zhang, Zirui Shang, Zhi Gao, and more
Potential Business Impact:
Teaches computers to do tasks by watching videos.
Building Graphical User Interface (GUI) agents is a promising research direction: such agents simulate human interaction with computers or mobile phones to perform diverse GUI tasks. However, a major challenge in developing generalized GUI agents is the lack of sufficient trajectory data across various operating systems and applications, mainly due to the high cost of manual annotation. In this paper, we propose the TongUI framework, which builds generalized GUI agents by learning from rich multimodal web tutorials. Concretely, we crawl and process online GUI tutorials (such as videos and articles) into GUI agent trajectory data, producing the GUI-Net dataset of 143K trajectories spanning five operating systems and more than 200 applications. We develop the TongUI agent by fine-tuning Qwen2.5-VL-3B/7B models on GUI-Net; the resulting agents show remarkable performance improvements on commonly used grounding and navigation benchmarks, outperforming baseline agents by about 10% on multiple benchmarks. These results demonstrate the effectiveness of the GUI-Net dataset and underscore the significance of the TongUI framework. We will fully open-source the code, the GUI-Net dataset, and the trained models soon.
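To make the tutorial-to-trajectory idea concrete, here is a minimal sketch of what converting a parsed web tutorial into a supervised trajectory record might look like. All class names, field names, helper functions, and the record layout below are hypothetical illustrations under assumptions about the pipeline; they are not the paper's actual GUI-Net schema or code.

```python
# A minimal sketch of the tutorial-to-trajectory conversion described in the
# abstract. Every name and field here is a hypothetical illustration, not the
# paper's actual GUI-Net schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class GUIStep:
    """One agent step: a screenshot paired with the action taken on it."""
    screenshot_path: str          # frame extracted from a video or article image
    instruction: str              # natural-language description of the step
    action: str                   # e.g. "click", "type", "scroll"
    target: tuple[float, float]   # normalized (x, y) coordinates of the action


@dataclass
class Trajectory:
    """A full task trajectory assembled from one web tutorial."""
    task: str                     # overall goal stated by the tutorial
    os: str                       # e.g. "Windows", "macOS", "Android"
    app: str                      # application the tutorial targets
    steps: list[GUIStep] = field(default_factory=list)


def tutorial_to_trajectory(task: str, os_name: str, app: str,
                           raw_steps: list[dict]) -> Trajectory:
    """Convert parsed tutorial steps (e.g. video keyframes plus transcript
    text) into a trajectory record suitable for fine-tuning."""
    traj = Trajectory(task=task, os=os_name, app=app)
    for s in raw_steps:
        traj.steps.append(GUIStep(
            screenshot_path=s["frame"],
            instruction=s["text"],
            action=s["action"],
            target=(s["x"], s["y"]),
        ))
    return traj


if __name__ == "__main__":
    demo = tutorial_to_trajectory(
        task="Enable dark mode in Settings",
        os_name="Windows",
        app="Settings",
        raw_steps=[{"frame": "frames/000.png", "text": "Open the Start menu",
                    "action": "click", "x": 0.02, "y": 0.97}],
    )
    print(json.dumps(asdict(demo), indent=2))
```

Records of this shape, serialized as screenshot-action pairs, are the kind of data a vision-language model such as Qwen2.5-VL can be fine-tuned on for grounding and navigation.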
Similar Papers
Breaking the Data Barrier -- Building GUI Agents Through Task Generalization
Artificial Intelligence
Teaches computers to do computer jobs better.
UItron: Foundational GUI Agent with Advanced Perception and Planning
CV and Pattern Recognition
Helps computers control phones and computers automatically.
UITron-Speech: Towards Automated GUI Agents Based on Speech Instructions
Computation and Language
Lets computers control apps using your voice.