Assembled-OpenML: Creating Efficient Benchmarks for Ensembles in AutoML with OpenML

Published: 16 May 2022, Last Modified: 03 Nov 2024
AutoML 2022 (Late-Breaking Workshop)
Readers: Everyone
Abstract: Automated Machine Learning (AutoML) frameworks regularly use ensembles. To select appropriate techniques for an AutoML framework from the many candidates, developers must compare different ensemble techniques. So far, such comparisons have often been computationally expensive, because many base models must be trained and evaluated one or multiple times. Therefore, we present Assembled-OpenML, a Python tool that builds meta-datasets for ensembles using OpenML. A meta-dataset, called a Metatask, consists of the data of an OpenML task, the task's dataset, and the prediction data from model evaluations for the task. Comparing ensemble techniques becomes computationally cheaper when the predictions stored in a Metatask are used instead of training and evaluating base models. To introduce Assembled-OpenML, we describe the first version of our tool. Moreover, we present an example of using Assembled-OpenML to compare a set of ensemble techniques. For this example comparison, we built a benchmark using Assembled-OpenML and implemented ensemble techniques that expect predictions instead of base models as input. In our example comparison, we gathered the prediction data of $1523$ base models for $31$ datasets. Obtaining the prediction data for all base models using Assembled-OpenML took ${\sim} 1$ hour in total. In comparison, obtaining the prediction data by training and evaluating just one base model on the most computationally expensive dataset took ${\sim} 37$ minutes.
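To illustrate the kind of data a Metatask bundles, the following minimal sketch uses the official openml-python package to fetch a task, its dataset, and pointers to the stored prediction files of already evaluated runs. This is not the Assembled-OpenML API; the task ID, the result limit, and the use of the run's `predictions_url` attribute are assumptions chosen for illustration only.

```python
# Hypothetical sketch (not the Assembled-OpenML API): gather prediction data
# for one OpenML task with the openml-python package. Assembled-OpenML
# automates and aggregates steps like these into a Metatask.
import openml

TASK_ID = 3  # example OpenML task ID (assumption, for illustration only)

# Fetch the task and its underlying dataset.
task = openml.tasks.get_task(TASK_ID)
dataset = openml.datasets.get_dataset(task.dataset_id)

# List evaluated runs (base models) for this task.
evals = openml.evaluations.list_evaluations(
    function="area_under_roc_curve",
    tasks=[TASK_ID],
    size=10,  # limit for illustration
    output_format="dataframe",
)

# For each run, the per-instance predictions stored on OpenML can be
# downloaded instead of retraining the base model.
for run_id in evals["run_id"]:
    run = openml.runs.get_run(int(run_id))
    # predictions_url points to the ARFF file with the run's predictions
    # (assumed attribute; check the openml-python documentation).
    print(run_id, run.predictions_url)
```

Downloading these stored prediction files is what makes benchmarking ensemble techniques with a Metatask much cheaper than retraining every base model.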
Keywords: Ensembles, AutoML, OpenML, Benchmarks, Tool
One-sentence Summary: A Python tool (Assembled-OpenML) that fetches data from OpenML to build meta-datasets, which include prediction data, usable to benchmark ensemble techniques.
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Lennart Purucker, lennart.purucker@uni-siegen.de
Main Paper And Supplementary Material: pdf
Code And Dataset Supplement: https://anonymous.4open.science/r/assembled-openml-3157
Community Implementations: [6 code implementations](https://www.catalyzex.com/paper/assembled-openml-creating-efficient/code)