Score: 1

LLM-Powered GUI Agents in Phone Automation: Surveying Progress and Prospects

Published: April 28, 2025 | arXiv ID: 2504.19838v2

By: Guangyi Liu, Pengxiang Zhao, Liang Liu, and more

Potential Business Impact:

Enables phones to understand natural-language instructions and carry them out automatically.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

With the rapid rise of large language models (LLMs), phone automation has undergone transformative changes. This paper systematically reviews LLM-driven phone GUI agents, highlighting their evolution from script-based automation to intelligent, adaptive systems. We first contextualize three key challenges, namely (i) limited generality, (ii) high maintenance overhead, and (iii) weak intent comprehension, and show how LLMs address them through advanced language understanding, multimodal perception, and robust decision-making. We then propose a taxonomy covering fundamental agent frameworks (single-agent, multi-agent, plan-then-act), modeling approaches (prompt engineering, training-based), and essential datasets and benchmarks. Furthermore, we detail task-specific architectures, supervised fine-tuning, and reinforcement learning strategies that bridge user intent and GUI operations. Finally, we discuss open challenges such as dataset diversity, on-device deployment efficiency, user-centric adaptation, and security concerns, offering forward-looking insights into this rapidly evolving field. By providing a structured overview and identifying pressing research gaps, this paper serves as a definitive reference for researchers and practitioners seeking to harness LLMs in designing scalable, user-friendly phone GUI agents.
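
To make the "plan-then-act" single-agent framework named in the abstract more concrete, the sketch below shows what such a loop might look like in code. It is an illustrative assumption, not an implementation from the paper: the `query_llm`, `Screen`, `plan`, and `act` names are hypothetical placeholders standing in for a real LLM service and a device controller (e.g., one driving `adb shell input`).

```python
# Minimal sketch of a "plan-then-act" single-agent loop for phone GUI automation.
# All names (query_llm, Screen, plan, act) are hypothetical placeholders, not APIs
# from the surveyed paper; a real agent would call an LLM service and a device
# controller such as ADB.

from dataclasses import dataclass


@dataclass
class Screen:
    """A simplified observation of the current phone UI."""
    description: str  # e.g., an accessibility-tree or OCR summary of the screen


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned plan for demonstration."""
    return "1. Open the Clock app\n2. Tap 'Alarm'\n3. Set 07:00 and save"


def plan(task: str, screen: Screen) -> list[str]:
    """Plan phase: ask the LLM to break the user's intent into GUI steps."""
    prompt = (
        f"Task: {task}\n"
        f"Current screen: {screen.description}\n"
        "List the GUI actions needed, one per line."
    )
    return [line.strip() for line in query_llm(prompt).splitlines() if line.strip()]


def act(step: str) -> None:
    """Act phase: translate one planned step into a device action (stubbed here)."""
    # A real implementation would map the step onto taps, swipes, or text input.
    print(f"[executing] {step}")


if __name__ == "__main__":
    task = "Set an alarm for 7 AM"
    screen = Screen(description="Home screen with app icons: Clock, Messages, ...")
    for step in plan(task, screen):
        act(step)
```

In this pattern the agent plans the full action sequence up front and then executes it step by step; the multi-agent and purely reactive single-agent variants discussed in the survey would structure this loop differently.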

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
39 pages

Category
Computer Science:
Human-Computer Interaction