A Survey on GUI Agents with Foundation Models Enhanced by Reinforcement Learning
By: Jiahao Li, Kaer Huang
Potential Business Impact:
Helps computers understand and use apps the way people do.
Graphical User Interface (GUI) agents, driven by Multi-modal Large Language Models (MLLMs), have emerged as a promising paradigm for enabling intelligent interaction with digital systems. This paper provides a structured survey of recent advances in GUI agents, focusing on architectures enhanced by Reinforcement Learning (RL). We first formalize GUI agent tasks as Markov Decision Processes and discuss typical execution environments and evaluation metrics. We then review the modular architecture of (M)LLM-based GUI agents, covering Perception, Planning, and Acting modules, and trace their evolution through representative works. Furthermore, we categorize GUI agent training methodologies into Prompt-based, Supervised Fine-Tuning (SFT)-based, and RL-based approaches, highlighting the progression from simple prompt engineering to dynamic policy learning via RL. Our summary illustrates how recent innovations in multimodal perception, decision reasoning, and adaptive action generation have significantly improved the generalization and robustness of GUI agents in complex real-world environments. We conclude by identifying key challenges and future directions for building more capable and reliable GUI agents.
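The abstract's MDP framing and its Perception, Planning, and Acting decomposition can be read as a simple policy rollout. The sketch below is a minimal illustration under assumed definitions, not code from the paper: the class and function names (State, Action, GUIAgent, rollout), the small click/type/scroll/stop action space, and the sparse task-completion reward are all hypothetical choices made for clarity.

```python
# Minimal sketch of the "GUI agent task as an MDP" framing.
# All names here are illustrative assumptions, not APIs from the surveyed systems.

from dataclasses import dataclass
from typing import Callable, Optional, Tuple


@dataclass
class State:
    """Observation s_t: screenshot plus an optional UI element tree and the task goal."""
    screenshot: bytes
    ui_tree: Optional[dict]
    instruction: str


@dataclass
class Action:
    """Action a_t drawn from a small GUI action space."""
    kind: str                                  # e.g. "click", "type", "scroll", "stop"
    target: Optional[Tuple[int, int]] = None   # screen coordinates for click/scroll
    text: Optional[str] = None                 # text payload for "type"


class GUIAgent:
    """Perception -> Planning -> Acting pipeline acting as the policy pi(a_t | s_t)."""

    def perceive(self, state: State) -> dict:
        # Perception: ground the raw screenshot / UI tree into candidate elements.
        return {"elements": (state.ui_tree or {}).get("elements", [])}

    def plan(self, grounded: dict, instruction: str) -> Action:
        # Planning: choose the next action toward the instruction.
        # A real agent would query an (M)LLM here; this stub simply stops.
        return Action(kind="stop")

    def act(self, state: State) -> Action:
        # Acting: emit the concrete GUI operation for the environment to execute.
        return self.plan(self.perceive(state), state.instruction)


def rollout(agent: GUIAgent,
            env_step: Callable[[Action], Tuple[State, float, bool]],
            initial_state: State,
            max_steps: int = 10) -> float:
    """One episode of the MDP: accumulate the (typically sparse) task-completion reward."""
    state, total_reward = initial_state, 0.0
    for _ in range(max_steps):
        action = agent.act(state)
        if action.kind == "stop":
            break
        state, reward, done = env_step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```

In this reading, prompt-based and SFT-based methods fix the policy from demonstrations or instructions, while RL-based methods update it from the rollout return, which is the progression the survey traces.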
Similar Papers
A Survey on (M)LLM-Based GUI Agents
Human-Computer Interaction
Computers learn to do tasks on screens by themselves.
Enhancing Visual Grounding for GUI Agents via Self-Evolutionary Reinforcement Learning
Artificial Intelligence
Helps computers understand and click on screen buttons.
LLM-Powered GUI Agents in Phone Automation: Surveying Progress and Prospects
Human-Computer Interaction
Makes phones understand and do what you say.