Keywords: Computer-Use Agent, GUI Visual Grounding, Multimodal Large Language Model
Abstract: Graphical user interface (GUI) grounding, the ability to map natural language instructions to specific actions on GUIs, remains a critical bottleneck in the development of computer-use agents.
Current benchmarks reduce grounding tasks to short referring expressions, failing to capture the complexity of real-world interactions that require software commonsense, layout understanding, and fine-grained manipulation.
To address these limitations, we introduce OSWorld-G, a comprehensive benchmark comprising 564 finely annotated samples spanning diverse task types, including text matching, element recognition, layout understanding, and precise manipulation.
Additionally, we synthesize and release Jedi, the largest computer-use grounding dataset to date, containing 4 million examples generated through multi-perspective decoupling of tasks.
Models of multiple scales trained on Jedi demonstrate its effectiveness, outperforming existing approaches on ScreenSpot-v2, ScreenSpot-Pro, and our OSWorld-G.
Furthermore, we demonstrate that improved grounding with Jedi directly enhances the agentic capabilities of general foundation models on complex computer tasks, raising performance on OSWorld from 23% to 51% and setting a new state of the art.
Through detailed ablation studies, we identify key factors contributing to grounding performance and verify that combining specialized data for different interface elements enables compositional generalization to novel interfaces.
The benchmark, data, checkpoints, and code are all open-sourced and available at https://osworld-grounding.github.io.
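For readers who want to inspect the released data, below is a minimal sketch of loading Jedi from the Hugging Face Hub. It assumes the repository follows the standard `datasets` layout; the split name and the fields printed are illustrative assumptions, not details confirmed by the abstract.

```python
# Minimal sketch: loading the Jedi dataset from the Hugging Face Hub.
# Assumes the repo uses the standard `datasets` layout; the "train" split
# and the structure of each record are assumptions, not taken from the paper.
from datasets import load_dataset

jedi = load_dataset("xlangai/Jedi", split="train")
print(jedi)      # number of rows and column names
print(jedi[0])   # inspect a single grounding example
```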
Croissant File: zip
Dataset URL: https://huggingface.co/datasets/xlangai/Jedi
Code URL: https://github.com/xlang-ai/osworld-g
Primary Area: Datasets & Benchmarks for applications in language modeling and vision language modeling
Submission Number: 2561