Machine Learning Automation Toolbox (MLAUT)

29 Oct 2018 (modified: 05 May 2023) · NIPS 2018 Workshop MLOSS Paper15 Decision · Readers: Everyone
Keywords: benchmarking, ml experiments
TL;DR: large-scale evaluation and benchmarking of machine learning algorithms
Abstract: In this paper we present MLAUT (Machine Learning AUtomation Toolbox) for the Python data science ecosystem. MLAUT automates large-scale evaluation and benchmarking of machine learning algorithms across many datasets. It provides a high-level workflow interface to machine learning algorithms, implements a local back-end database of dataset collections, trained algorithms, and experimental results, and offers easy-to-use interfaces to the scikit-learn and keras modelling libraries. Experiments are easy to set up with default settings in a few lines of code, while remaining fully customizable down to the level of hyper-parameter tuning, pipeline composition, or deep learning architecture. This is a short ("extended abstract") version, abridged for the NIPS submission prior to the public release of MLAUT, of a longer manuscript which also includes the full mathematical background and descriptions of the implemented post-hoc analyses, a detailed overview of the package design, and results of a large-scale benchmarking study conducted with MLAUT. A demo can be found on GitHub: https://github.com/ViktorKaz/NIPS_2018
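To illustrate the kind of workflow the abstract describes, below is a minimal sketch of a cross-dataset benchmarking loop of the sort MLAUT automates. It is written against plain scikit-learn rather than MLAUT's own API (which is not shown in this abstract); the dataset and estimator choices are illustrative assumptions, not part of the paper.

```python
# Sketch: benchmark several estimators across several datasets,
# collecting mean cross-validated accuracy per (dataset, estimator) cell.
# MLAUT automates this pattern at scale and persists results to a database;
# here we use plain scikit-learn and an in-memory dict instead.
from sklearn.datasets import load_iris, load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

datasets = {
    "iris": load_iris(return_X_y=True),
    "wine": load_wine(return_X_y=True),
}
estimators = {
    "random_forest": RandomForestClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

results = {}
for dataset_name, (X, y) in datasets.items():
    for estimator_name, estimator in estimators.items():
        # One cell of the benchmark table: 5-fold CV accuracy.
        scores = cross_val_score(estimator, X, y, cv=5)
        results[(dataset_name, estimator_name)] = scores.mean()
```

The `results` dict plays the role of MLAUT's experimental-results store; post-hoc analyses (e.g. rank comparisons across datasets) would then be computed from such a table.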
Decision: reject