Abstract: Visual agent models for automating human activities on Graphical User Interfaces (GUIs) have emerged as a promising research direction, driven by advances in large Vision Language Models (VLMs). A critical challenge in GUI automation is the precise grounding of interface elements across diverse platforms. Existing vision-only GUI agents ground elements directly from large and cluttered screenshots, requiring them to process substantial irrelevant information that compromises their accuracy. In addition, these approaches typically employ a basic cross-entropy loss for learning grounding objectives, which fails to capture grounding quality as effectively as established object detection metrics like Intersection-over-Union (IoU). To address these issues, we introduce R-VLM, a novel GUI grounding approach that leverages zoomed-in region proposals for precise element localization. We also propose an IoU-aware objective function that facilitates model convergence toward high-IoU predictions. Our approach bridges the gap between VLMs and conventional object detection techniques, improving state-of-the-art grounding accuracy by 13% across diverse GUI platforms on the ScreenSpot and AgentStudio grounding benchmarks. In addition, our R-VLM approach shows 3.2-9.7% absolute accuracy improvements on GUI navigation tasks on the AITW and Mind2Web benchmarks.
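The abstract names an IoU-aware objective but does not spell out its form here. As a rough, non-authoritative sketch (not the paper's formulation), one common way to make a token-level cross-entropy loss IoU-aware is to add a penalty on the boxes decoded from the model's coordinate tokens; the function names, the (1 - IoU) penalty, and the weight `lam` below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def iou(boxes_a, boxes_b):
    """IoU for axis-aligned boxes in (x1, y1, x2, y2) format, shape (N, 4)."""
    x1 = torch.max(boxes_a[:, 0], boxes_b[:, 0])
    y1 = torch.max(boxes_a[:, 1], boxes_b[:, 1])
    x2 = torch.min(boxes_a[:, 2], boxes_b[:, 2])
    y2 = torch.min(boxes_a[:, 3], boxes_b[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    return inter / (area_a + area_b - inter + 1e-7)

def iou_aware_loss(token_logits, target_tokens, pred_boxes, gt_boxes, lam=1.0):
    """Token-level cross-entropy plus an IoU penalty on the decoded boxes (hypothetical combination)."""
    # Standard next-token cross-entropy over the coordinate tokens.
    ce = F.cross_entropy(token_logits.flatten(0, -2), target_tokens.flatten())
    # Penalize low overlap between predicted and ground-truth boxes.
    iou_term = (1.0 - iou(pred_boxes, gt_boxes)).mean()
    return ce + lam * iou_term
```

The intuition matching the abstract's claim: the cross-entropy term alone scores coordinate tokens independently and does not reflect how well the resulting box overlaps the target element, whereas the added (1 - IoU) term directly rewards predictions with high overlap.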