Abstract: Mobile GUI agents have recently attracted substantial research attention.
Traditional approaches to mobile agent training rely on centralized data collection, which incurs high cost and limits scalability.
Distributed training via federated learning offers an alternative: it harnesses real-world user data, improving scalability and reducing cost.
However, pivotal challenges, including the absence of standardized benchmarks, hinder progress in this field.
To tackle these challenges, we introduce FedMABench, the first benchmark for federated training and evaluation of mobile GUI agents, specifically designed for heterogeneous scenarios. FedMABench features 6 datasets with 30+ subsets, 8 federated algorithms, 10+ base models, and over 800 apps across 5 categories, providing a comprehensive framework for evaluating mobile agents across diverse environments.
Through extensive experiments, we uncover several key insights: federated algorithms consistently outperform local training; the distribution of specific apps plays a crucial role in heterogeneity; and even apps from distinct categories can exhibit correlations during training.
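To make the comparison between federated and local training concrete, the simplest of the federated algorithms such a benchmark typically includes is FedAvg-style weighted parameter averaging. The sketch below is purely illustrative (it is not FedMABench's code, and the client data are hypothetical): each client trains locally on its own heterogeneous app usage, and a server averages the resulting parameters weighted by local data size.

```python
# Illustrative sketch of FedAvg-style aggregation (not FedMABench's
# actual implementation). Client weights and sizes are hypothetical.

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client model parameters.

    client_weights: list of dicts mapping parameter name -> float value
    client_sizes:   number of local training examples per client
    """
    total = sum(client_sizes)
    aggregated = {}
    for name in client_weights[0]:
        aggregated[name] = sum(
            w[name] * n / total
            for w, n in zip(client_weights, client_sizes)
        )
    return aggregated

# Three heterogeneous clients, e.g. users with different app mixes.
clients = [{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 1.0}, {"w": 2.0, "b": 0.5}]
sizes = [100, 300, 100]
global_model = fed_avg(clients, sizes)  # {"w": 2.4, "b": 0.7}
```

In a real federated run, the averaged parameters are broadcast back to clients for the next round; heterogeneity in `client_sizes` and app distributions is exactly what the benchmark's subsets are designed to vary.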
FedMABench is publicly available at: https://anonymous.4open.science/r/FedMABench.
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: GUI Agent, Mobile Agent, User Heterogeneity, Federated Learning
Contribution Types: Reproduction study, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 1388