AI Agents for Web Testing: A Case Study in the Wild
By: Naimeng Ye, Xiao Yu, Ruize Xu, and others
Potential Business Impact:
Finds website problems like a real person.
Automated web testing plays a critical role in ensuring high-quality user experiences and delivering business value. Traditional approaches primarily focus on code coverage and load testing but often fall short of capturing complex user behaviors, leaving many usability issues undetected. The emergence of large language models (LLMs) and AI agents opens new possibilities for web testing by enabling human-like interaction with websites and a general awareness of common usability problems. In this work, we present WebProber, a prototype AI agent-based web testing framework. Given a URL, WebProber autonomously explores the website, simulating real user interactions, identifying bugs and usability issues, and producing a human-readable report. We evaluate WebProber through a case study of 120 academic personal websites, where it uncovered 29 usability issues, many of which were missed by traditional tools. Our findings highlight agent-based testing as a promising approach and outline directions for developing next-generation, user-centered testing frameworks.
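The explore-act-observe-report cycle described above can be sketched in miniature. WebProber's implementation is not shown in the abstract, so everything below is an assumption for illustration: `Page` stands in for a real browser driver, and `propose_action` stands in for an LLM policy that would pick the next human-like interaction.

```python
"""Minimal sketch of an agent-style web-testing loop (assumed design,
not WebProber's actual code): explore pages, flag issues, emit a report."""
from dataclasses import dataclass, field


@dataclass
class Page:
    """Stand-in for a rendered page; a real system would use a browser driver."""
    url: str
    links: dict                      # link text -> target URL ("" models a broken href)
    issues: list = field(default_factory=list)


def propose_action(page, visited):
    # Stub policy: a real agent would ask an LLM to choose the next
    # user-like interaction; here we simply follow the first unvisited link.
    for text, target in page.links.items():
        if target not in visited:
            return text, target
    return None


def probe(site, start_url, max_steps=10):
    """Explore `site` (a url -> Page map) from start_url, collecting issues."""
    visited, report = set(), []
    url = start_url
    for _ in range(max_steps):
        page = site.get(url)
        if page is None:                       # navigation led nowhere
            report.append(f"{url}: dead link (404)")
            break
        visited.add(url)
        for text, target in page.links.items():
            if target == "":                   # anchor with no destination
                report.append(f"{page.url}: link '{text}' has no destination")
        action = propose_action(page, visited)
        if action is None:
            break
        url = action[1]
    return report


# Tiny example site with one empty link and one dead target.
site = {
    "/": Page("/", {"Publications": "/pubs", "CV": ""}),
    "/pubs": Page("/pubs", {"Talk slides": "/slides"}),
}
print(probe(site, "/"))
```

Running the sketch reports the empty "CV" link on the home page and the dead "/slides" target, mimicking the kind of human-readable findings the paper describes.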
Similar Papers
Build the web for agents, not agents for the web
Machine Learning (CS)
Builds websites for AI to use easily.
LegalWebAgent: Empowering Access to Justice via LLM-Based Web Agents
Computers and Society
Helps people understand and use legal websites easily.
An Illusion of Progress? Assessing the Current State of Web Agents
Artificial Intelligence
Tests AI agents' web skills and finds them weaker than reported.