AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World

Published: 18 Apr 2025 · Last Modified: 07 May 2025 · ICRA 2025 FMNS Poster · License: CC BY 4.0
Keywords: Policy Evaluation, Generalist Robot Policies, Robot Learning, Robot Manipulation
TL;DR: AutoEval is a system that autonomously evaluates generalist robot policies around the clock using learned success classification and automatic scene resets; its results correlate closely with ground-truth human evaluations while saving over 99% of human effort.
Abstract: Scalable and reproducible policy evaluation has been a long-standing challenge in robot learning. Evaluations are critical for assessing progress and building better policies, but evaluation in the real world, especially at a scale that would provide statistically reliable results, is costly in human time and hard to obtain. Evaluating increasingly generalist robot policies requires an increasingly diverse repertoire of evaluation environments, making the evaluation bottleneck even more pronounced. To make real-world evaluation of robotic policies more practical, we propose AutoEval, a system that autonomously evaluates generalist robot policies around the clock with minimal human intervention. Users interact with AutoEval by submitting evaluation jobs to the AutoEval queue, much as software jobs are submitted to a cluster scheduling system, and AutoEval schedules the policies for evaluation within a framework supplying automatic success detection and automatic scene resets. We show that AutoEval can nearly fully eliminate human involvement in the evaluation process, permitting around-the-clock evaluation, and that its results correspond closely to ground-truth evaluations conducted by hand. To facilitate the evaluation of generalist policies in the robotics community, we provide public access (https://auto-eval.github.io) to multiple AutoEval scenes in the popular BridgeData robot setup with WidowX robot arms. In the future, we hope that AutoEval scenes can be set up across institutions to form a diverse and distributed evaluation network.
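The abstract describes a cluster-scheduler-style workflow: users submit evaluation jobs to a queue, and the system runs each policy through episodes with automatic success detection and scene resets. A minimal sketch of that workflow is below; all class, field, and method names here (`EvalJob`, `EvalQueue`, `policy_endpoint`, etc.) are illustrative assumptions, not the actual AutoEval API.

```python
# Illustrative sketch of an AutoEval-style evaluation queue.
# Names and structure are hypothetical; see https://auto-eval.github.io
# for the real interface.
import queue
from dataclasses import dataclass


@dataclass
class EvalJob:
    policy_endpoint: str  # URL of a server mapping observations to actions
    scene: str            # which evaluation scene/task to run
    num_episodes: int     # episodes per job, for a reliable success estimate


class EvalQueue:
    """FIFO scheduler: jobs share one robot, so they run one at a time."""

    def __init__(self) -> None:
        self._jobs: "queue.Queue[EvalJob]" = queue.Queue()

    def submit(self, job: EvalJob) -> None:
        self._jobs.put(job)

    def run_next(self) -> dict:
        job = self._jobs.get()
        successes = 0
        for _ in range(job.num_episodes):
            # In a real system each iteration would:
            #  1. roll out the policy by querying job.policy_endpoint each step,
            #  2. score the episode with a learned success classifier,
            #  3. trigger an automatic scene reset before the next episode.
            successes += self._run_episode(job)
        return {"scene": job.scene,
                "success_rate": successes / job.num_episodes}

    def _run_episode(self, job: EvalJob) -> int:
        # Placeholder: a real implementation drives the robot and returns
        # 1 on classified success, 0 otherwise. Here it always returns 0.
        return 0
```

The queue abstraction is what lets evaluation run around the clock: jobs accumulate while the robot is busy, and each finished episode's automatic reset leaves the scene ready for the next one without human intervention.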
Supplementary Material: zip
Submission Number: 12
