AMLB: an AutoML Benchmark

Published: 10 Jul 2024, Last Modified: 29 Aug 2024
Venue: AutoML 2024 Journal Track
License: CC BY 4.0
Abstract: Comparing different AutoML frameworks is notoriously challenging and often done incorrectly. We introduce an open and extensible benchmark that follows best practices and avoids common mistakes when comparing AutoML frameworks. We conduct a thorough comparison of 9 well-known AutoML frameworks across 71 classification and 33 regression tasks. The differences between the AutoML frameworks are explored with a multi-faceted analysis, evaluating model accuracy, its trade-offs with inference time, and framework failures. We also use Bradley-Terry trees to discover subsets of tasks where the relative AutoML framework rankings differ. The benchmark comes with an open-source tool that integrates with many AutoML frameworks and automates the empirical evaluation process end-to-end: from framework installation and resource allocation to in-depth evaluation. The benchmark uses public data sets, can be easily extended with other AutoML frameworks and tasks, and has a website with up-to-date results.
Paper Is In Scope: Yes
Paper Has Not Been Presented At A Conference: Yes
Submission Checklist And Broader Impact Statement: pdf
Submission Checklist: Yes
Broader Impact Statement: Yes
Link To Paper: https://www.jmlr.org/papers/v25/22-0493.html
Link To Code: https://github.com/openml/automlbenchmark
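Usage note: as a minimal sketch of how an evaluation is launched with the linked tool (the exact framework and benchmark names here are illustrative; consult the repository documentation for the supported options), the repository's runbenchmark.py entry point is invoked from the repository root in the form "python runbenchmark.py <framework> <benchmark_definition> <constraint>", e.g. "python runbenchmark.py randomforest test". The tool then handles framework installation, resource allocation, and evaluation as described in the abstract.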
Note On License: The CC-BY-4.0 license pertains only to the uploaded PDF. The journal paper's license is not affected.
Submission Number: 5