MobileGUI-RL: Advancing Mobile GUI Agent through Reinforcement Learning in Online Environment

Published: July 8, 2025 | arXiv ID: 2507.05720v1

By: Yucheng Shi, Wenhao Yu, Zaitang Li, and more

Potential Business Impact:

Teaches mobile phones to complete everyday app tasks autonomously by learning from on-screen interactions.

Recently, there has been a surge of vision-based GUI agents designed to automate everyday mobile and web tasks. These agents interpret raw GUI screenshots and autonomously decide where to click, scroll, or type, bypassing handcrafted rules and app-specific APIs. However, most existing methods train GUI agents in offline environments using pre-collected trajectories. This approach limits scalability, causes overfitting to specific UI templates, and leads to brittle policies when faced with unseen environments. We present MobileGUI-RL, a scalable framework that trains GUI agents in online environments. MobileGUI-RL contains two key components: it (i) synthesizes a curriculum of learnable tasks through self-exploration and filtering, and (ii) adapts GRPO to GUI navigation with trajectory-aware advantages and composite rewards that balance task success and execution efficiency. Experiments on three online mobile-agent benchmarks show consistent gains, validating the effectiveness of our approach.
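
The two components in the abstract can be made concrete with a short sketch. The Python snippet below illustrates, under assumed weights and thresholds (the function names, default values, and the linear efficiency bonus are illustrative, not taken from the paper), how a composite reward balancing task success and execution efficiency might feed a GRPO-style group-relative trajectory advantage, and how self-explored tasks might be filtered for learnability when building the curriculum.

```python
import numpy as np

def composite_reward(success: bool, num_steps: int, max_steps: int,
                     success_weight: float = 1.0,
                     efficiency_weight: float = 0.1) -> float:
    """Composite reward balancing task success and execution efficiency.

    The weights and the linear efficiency bonus are assumptions for
    illustration, not the paper's exact formulation.
    """
    reward = success_weight * float(success)
    if success:
        # Shorter successful trajectories earn a larger efficiency bonus.
        reward += efficiency_weight * (1.0 - num_steps / max_steps)
    return reward

def trajectory_advantages(group_rewards: list[float],
                          eps: float = 1e-8) -> np.ndarray:
    """GRPO-style group-relative advantage: normalize each trajectory's
    reward against the group of rollouts sampled for the same task.
    The trajectory-level advantage is then broadcast to every step of
    that trajectory during the policy update.
    """
    r = np.asarray(group_rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

def is_learnable(success_rate: float,
                 lo: float = 0.1, hi: float = 0.9) -> bool:
    """Curriculum filter for self-explored tasks: keep tasks that are
    neither trivial nor impossible for the current policy (thresholds
    are assumed, not taken from the paper).
    """
    return lo < success_rate < hi

# Example: four rollouts of one task as (success flag, steps taken) pairs.
rollouts = [(True, 6), (True, 10), (False, 15), (True, 8)]
rewards = [composite_reward(s, n, max_steps=15) for s, n in rollouts]
print(trajectory_advantages(rewards))    # shortest success scores highest
print(is_learnable(success_rate=3 / 4))  # True: task stays in curriculum
```

Broadcasting one trajectory-level advantage to every step sidesteps per-step credit assignment, which fits online GUI settings where only trajectory-level outcomes (task success and length) are observable.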

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)