AI Agents for Web Testing: A Case Study in the Wild

Published: 23 Sept 2025, Last Modified: 22 Nov 2025 · License: CC BY 4.0
Keywords: AI Agents, Web Testing, Visual Language Models, Usability Testing, Bug Detection, User Experience
TL;DR: We present WebProber, an AI agent framework that leverages vision-language models (VLMs) for automated web testing. It discovers contextual usability bugs missed by traditional tools, identifying 29 real issues across 120 websites through human-like interaction.
Abstract: Automated web testing plays a critical role in ensuring high-quality user experiences and delivering business value. Traditional approaches primarily focus on code coverage and load testing, but often fall short of capturing complex user behaviors, leaving many usability issues undetected. The emergence of large language models (LLMs) and AI agents opens new possibilities for web testing by enabling human-like interaction with websites and a general awareness of common usability problems. In this work, we present WebProber, a prototype AI agent-based web testing framework. Given a URL, WebProber autonomously explores the website, simulates real user interactions, identifies bugs and usability issues, and produces a human-readable report. We evaluate WebProber through a case study of 120 academic personal websites, where it uncovered 29 usability issues, many of which were missed by traditional tools. Our findings highlight agent-based testing as a promising approach while outlining directions for developing next-generation, user-centered testing frameworks.
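
To make the explore-interact-report workflow described in the abstract concrete, the following is a minimal sketch of a VLM-driven testing loop. It assumes Playwright for browser control and an OpenAI-style vision-capable chat endpoint; the model name, prompt, and the probe function are illustrative assumptions and do not reflect WebProber's actual implementation.

# Sketch of a VLM-driven web-testing agent loop (not the paper's code).
# Assumed stack: Playwright for browser automation, an OpenAI-style VLM API.
import base64
import json
from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are a usability tester. Given a screenshot of a web page, reply with "
    'JSON: {"action": "click", "selector": "<css>"} to keep exploring, or '
    '{"action": "report", "issues": ["..."]} listing usability problems you see.'
)

def probe(url: str, max_steps: int = 5) -> list[str]:
    """Explore a site from its URL, simulate interactions, and collect issues."""
    issues: list[str] = []
    with sync_playwright() as p:
        page = p.chromium.launch(headless=True).new_page()
        page.goto(url)
        for _ in range(max_steps):
            # Show the current page to the vision-language model.
            shot = base64.b64encode(page.screenshot()).decode()
            reply = client.chat.completions.create(
                model="gpt-4o",  # any vision-capable chat model
                messages=[{"role": "user", "content": [
                    {"type": "text", "text": PROMPT},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{shot}"}},
                ]}],
            )
            # Output parsing is simplified; a real agent would validate the JSON.
            step = json.loads(reply.choices[0].message.content)
            if step["action"] == "report":
                issues.extend(step["issues"])
                break
            page.click(step["selector"])  # simulate a real user interaction
    return issues

In this sketch, probe("https://example.edu/~someone") would return a list of issue descriptions that could then be rendered into a human-readable report; the paper's system additionally handles autonomous exploration strategies and report generation beyond this loop.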
Submission Type: Demo Paper (4-9 Pages)
Submission Number: 94