AndroidLab: Developing and Evaluating Android Agents in A Reproducible Environment

ACL ARR 2024 December Submission809 Authors

15 Dec 2024 (modified: 05 Feb 2025) · CC BY 4.0
Abstract: Autonomous agents have become increasingly important for interacting with the real world. Android agents, in particular, have recently emerged as a frequently discussed interaction method. However, existing studies on training and evaluating Android agents lack systematic research covering both open-source and closed-source models. In this work, we propose \textsc{AndroidLab} as a systematic Android agent framework. It includes an operation environment with different modalities and an action space, together with a reproducible benchmark. It supports both large language models (LLMs) and large multimodal models (LMMs) in the same action space. The \textsc{AndroidLab} benchmark includes predefined Android virtual devices and 138 tasks across nine apps built on these devices. Using the \textsc{AndroidLab} environment, we develop an Android Instruction dataset and train six open-source LLMs and LMMs, lifting the average success rates from 5.07\% to 25.60\% for LLMs and from 1.69\% to 14.98\% for LMMs. \textsc{AndroidLab} is open-sourced and publicly available at \url{https://anonymous.4open.science/r/Android-Lab-Reivew-C93E}.
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: NLP Applications, Resources and Evaluation
Contribution Types: Data resources
Languages Studied: English
Submission Number: 809
