Toward Autonomous UI Exploration: The UIExplorer Benchmark

Published: 08 Jun 2025 · Last Modified: 27 Jun 2025 · WCUA 2025 Poster · CC BY 4.0
Submission Track: Paper Track (up to 8 pages)
Keywords: exploration, computer-use agents, llm agents, agents, UI, benchmark
TL;DR: We introduce UIExplore-Bench, the first benchmark for systematic UI exploration, propose a new evaluation metric (hUFO) and an exploration dataset, and show that our algorithm, UIExplore-AlGo, outperforms baselines in both Structured and Screen modes.
Abstract: Autonomous agents must know how to explore user interfaces (UIs) for reliable task solving, yet systematic evaluation of this crucial phase is lacking. We introduce UIExplore-Bench, the first benchmark explicitly dedicated to UI exploration. The benchmark evaluates agents in either Structured mode (granting access to layout information like DOM trees) or Screen mode (relying on GUI-only observations such as screenshots and human-like mouse/keyboard interactions) across three levels in a standardized GitLab sandbox environment. We formalize exploration as the process of maximizing the set of actionable UI components discovered and propose a metric, human-normalized UI-Functionalities Observed (hUFO), to quantify the effectiveness of exploration. Our results show that UIExplore-AlGo achieves the leading mean hUFO scores, reaching up to 77.2% of human performance in Structured mode and 59.0% in Screen mode at 2,000 steps, particularly excelling at the Sparse level. These results highlight the relevance of our benchmark, as current agents show a substantial performance gap compared to 1 hour of human expert exploration, indicating ample room for future advancements. We publicly release the benchmark environment, an exploration dataset, and an evaluation suite to catalyze research into efficient UI exploration strategies and their downstream applications, such as experience-driven task completion and automated training data generation.
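For intuition only, the sketch below shows one way a human-normalized exploration score in the spirit of hUFO could be computed: the fraction of actionable UI functionalities found by a human-expert baseline that an agent also discovers. The function name, the set-based normalization, and the example identifiers are illustrative assumptions, not the paper's exact definition.

```python
# Minimal sketch of a human-normalized exploration score (assumed form, not the
# paper's exact hUFO definition, which may weight by time, level, or step budget).

def hufo(agent_discovered: set[str], human_discovered: set[str]) -> float:
    """Fraction of human-discovered UI functionalities also found by the agent."""
    if not human_discovered:
        return 0.0
    return len(agent_discovered & human_discovered) / len(human_discovered)

# Example: the agent finds 3 of the 4 functionalities a human expert observed.
agent = {"new_issue_button", "search_bar", "settings_menu"}
human = {"new_issue_button", "search_bar", "settings_menu", "merge_request_tab"}
print(f"hUFO = {hufo(agent, human):.2f}")  # hUFO = 0.75
```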
Submission Number: 23