Autonomous Evaluation and Refinement of Digital Agents

Published: 10 Jul 2024, Last Modified: 26 Aug 2024 · COLM · CC BY 4.0
Research Area: LMs with tools and code
Keywords: Language Agent, Autonomous Refinement, Automatic Evaluation
TL;DR: We study the design and use of automated evaluation models to both evaluate and autonomously refine the performance of digital agents, with our experiments confirming their effectiveness.
Abstract: We show that domain-general automatic evaluators can significantly improve the performance of agents for web navigation and device control. We experiment with multiple evaluation models that trade off between inference cost, modularity of design, and accuracy. We validate the performance of these models in several popular benchmarks for digital agents, finding between 74.4% and 92.9% agreement with oracle evaluation metrics. Finally, we use these evaluators to improve the performance of existing agents via fine-tuning and inference-time guidance. Without any additional supervision, we improve state-of-the-art performance by 29% on the popular benchmark WebArena, and achieve around 75% relative improvement in device control settings. We release our code and data at [https://github.com/Berkeley-NLP/Agent-Eval-Refine](https://github.com/Berkeley-NLP/Agent-Eval-Refine).
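
The abstract mentions using evaluators for inference-time guidance; the sketch below illustrates one plausible way an automatic evaluator could drive a retry loop around an existing agent. It is a minimal illustration under assumed interfaces, not the paper's actual implementation: the names `refine_with_evaluator`, `run_agent`, and `evaluate` are hypothetical placeholders.

```python
# Minimal sketch (hypothetical interface, not the authors' code): an automatic
# evaluator judges each agent rollout and its critique is fed back to the agent
# for another attempt, i.e. evaluator-guided inference-time refinement.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class Trajectory:
    """One agent rollout: the task instruction plus the actions taken."""
    instruction: str
    actions: List[str] = field(default_factory=list)
    final_observation: str = ""


def refine_with_evaluator(
    instruction: str,
    run_agent: Callable[[str, str], Trajectory],      # (instruction, feedback) -> rollout
    evaluate: Callable[[Trajectory], Tuple[bool, str]],  # rollout -> (success, critique)
    max_attempts: int = 3,
) -> Trajectory:
    """Re-run the agent until the evaluator judges the rollout successful."""
    feedback = ""
    trajectory = None
    for _ in range(max_attempts):
        trajectory = run_agent(instruction, feedback)
        success, critique = evaluate(trajectory)
        if success:
            break
        # Condition the next attempt on the evaluator's critique of this failure.
        feedback = critique
    return trajectory
```

The same evaluator signal could also be used offline, e.g. to filter rollouts judged successful into a fine-tuning set, which corresponds to the fine-tuning use case the abstract describes.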
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 1179