GUITester: Enabling GUI Agents for Exploratory Defect Discovery
By: Yifei Gao, Jiang Wu, Xiaoyi Chen, and more
Potential Business Impact:
Finds hidden bugs in computer programs automatically.
Exploratory GUI testing is essential for software quality but suffers from high manual costs. While Multi-modal Large Language Model (MLLM) agents excel in navigation, they fail to autonomously discover defects due to two core challenges: Goal-Oriented Masking, where agents prioritize task completion over reporting anomalies, and Execution-Bias Attribution, where system defects are misidentified as agent errors. To address these, we first introduce GUITestBench, the first interactive benchmark for this task, featuring 143 tasks across 26 defects. We then propose GUITester, a multi-agent framework that decouples navigation from verification via two modules: (i) a Planning-Execution Module (PEM) that proactively probes for defects via embedded testing intents, and (ii) a Hierarchical Reflection Module (HRM) that resolves attribution ambiguity through interaction-history analysis. GUITester achieves an F1-score of 48.90% (Pass@3) on GUITestBench, outperforming state-of-the-art baselines (33.35%). Our work demonstrates the feasibility of autonomous exploratory testing and provides a robust foundation for future GUI quality assurance. Our code is available at https://github.com/ADaM-BJTU/GUITestBench.
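The decoupling the abstract describes can be sketched in miniature: one component interleaves navigation with deliberate defect probes, and a second reviews the full interaction history to decide whether an anomaly is a system defect or an agent mistake. This is a hypothetical illustration only; all class names, the alternating navigate/probe policy, and the attribution heuristic are assumptions for exposition, not the paper's actual implementation.

```python
# Toy sketch of a PEM/HRM-style split (hypothetical; names and logic
# are illustrative, not taken from the GUITester codebase).
from dataclasses import dataclass

@dataclass
class Step:
    action: str        # action the agent took
    intent: str        # "navigate" toward the goal, or "probe" for defects
    observation: str   # GUI state observed afterward

@dataclass
class Verdict:
    is_defect: bool
    attribution: str   # "system", "agent", or "none"

class PlanningExecutionModule:
    """Interleaves task navigation with proactive defect probes."""
    def next_step(self, task: str, history: list[Step]) -> Step:
        # Assumed policy: even steps navigate, odd steps probe nearby UI.
        if len(history) % 2 == 0:
            return Step(f"navigate({task})", "navigate", "ok")
        return Step("probe(adjacent_widget)", "probe", "stale element")

class HierarchicalReflectionModule:
    """Attributes anomalies by analyzing the interaction history."""
    def reflect(self, history: list[Step]) -> Verdict:
        # Assumed heuristic: an anomaly surfaced by a deliberate probe,
        # while navigation itself succeeded, suggests a system defect
        # rather than an agent execution error.
        anomalies = [s for s in history if s.observation != "ok"]
        if not anomalies:
            return Verdict(False, "none")
        probed = any(s.intent == "probe" for s in anomalies)
        return Verdict(True, "system" if probed else "agent")

# Minimal run: four steps of the interleaved loop, then one reflection.
history: list[Step] = []
pem, hrm = PlanningExecutionModule(), HierarchicalReflectionModule()
for _ in range(4):
    history.append(pem.next_step("open settings", history))
verdict = hrm.reflect(history)
print(verdict.is_defect, verdict.attribution)  # True system
```

The point of the split is that the executor never has to judge its own behavior: attribution is delegated to a separate module with access to the whole trace, which is what lets probe-triggered anomalies be reported instead of being masked by the task goal.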
Similar Papers
GUISpector: An MLLM Agent Framework for Automated Verification of Natural Language Requirements in GUI Prototypes
Software Engineering
Checks if computer screens match what people want.
GUI-explorer: Autonomous Exploration and Mining of Transition-aware Knowledge for GUI Agent
Artificial Intelligence
Helps computers learn apps without retraining.
AUTO-Explorer: Automated Data Collection for GUI Agent
Artificial Intelligence
Teaches computers to understand new apps quickly.