Personalized Benchmarking with the Ludwig Benchmarking Toolkit

Published: 29 Jul 2021, Last Modified: 08 Sep 2024
Venue: NeurIPS 2021 Datasets and Benchmarks Track (Round 1)
Keywords: benchmarking, benchmarking tools, benchmarking toolkits, model benchmarking, benchmarks
TL;DR: This work introduces the Ludwig Benchmarking Toolkit (LBT): an extensible toolkit for creating personalized model benchmark studies across a wide range of machine learning tasks, deep learning models, and datasets.
Abstract: The rapid proliferation of machine learning models across domains and deployment settings has given rise to various communities (e.g., industry practitioners) that seek to benchmark models across tasks and objectives of personal value. Unfortunately, these users cannot use standard benchmark results to perform such value-driven comparisons, as traditional benchmarks evaluate models on a single objective (e.g., average accuracy) and fail to provide a standardized training framework that controls for confounding variables (e.g., computational budget), making fair comparisons difficult. To address these challenges, we introduce the open-source Ludwig Benchmarking Toolkit (LBT), a personalized benchmarking toolkit for running end-to-end benchmark studies (from hyperparameter optimization to evaluation) across an easily extensible set of tasks, deep learning models, datasets, and evaluation metrics. LBT provides a configurable interface for controlling training and customizing evaluation, a standardized training framework for eliminating confounding variables, and support for multi-objective evaluation. We demonstrate how LBT can be used to create personalized benchmark studies with a large-scale comparative analysis for text classification across 7 models and 9 datasets. We explore the trade-offs between inference latency and performance, relationships between dataset attributes and performance, and the effects of pretraining on convergence and robustness, showing how LBT can be used to satisfy various benchmarking objectives.
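To make the abstract's workflow concrete, below is a minimal Python sketch of how such a personalized, multi-objective study over several models and datasets might be expressed. All names here (`BenchmarkConfig`, `run_benchmark`, the model and dataset identifiers) are illustrative assumptions, not LBT's actual API; see the repository linked below for real usage.

```python
# Hypothetical sketch of a personalized benchmark study in the spirit of LBT.
# The names below are illustrative assumptions, not the toolkit's real API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BenchmarkConfig:
    """A value-driven study: text classification across several models and
    datasets, evaluated on multiple objectives under a shared budget."""
    task: str = "text_classification"
    models: List[str] = field(default_factory=lambda: ["bert", "distilbert", "rnn"])
    datasets: List[str] = field(default_factory=lambda: ["agnews", "sst5", "yelp"])
    # Objectives beyond average accuracy enable multi-objective comparisons.
    objectives: List[str] = field(default_factory=lambda: ["accuracy", "inference_latency"])
    # A fixed hyperparameter-search budget controls for confounding variables.
    hyperopt_trials: int = 20


def run_benchmark(config: BenchmarkConfig) -> None:
    """Placeholder driver: a real toolkit would dispatch hyperparameter
    search, standardized training, and evaluation per (model, dataset) pair."""
    for model in config.models:
        for dataset in config.datasets:
            print(
                f"Would tune {model} on {dataset}: "
                f"{config.hyperopt_trials} trials, objectives={config.objectives}"
            )


if __name__ == "__main__":
    run_benchmark(BenchmarkConfig())
```

The key design point the abstract emphasizes is that every (model, dataset) pair runs under the same search budget and training framework, so differences in the reported objectives reflect the models rather than uncontrolled confounds.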
Supplementary Material: zip
URL: https://github.com/HazyResearch/ludwig-benchmarking-toolkit
Contribution Process Agreement: Yes
Dataset URL: https://github.com/HazyResearch/ludwig-benchmarking-toolkit
License: Apache License 2.0
Author Statement: Yes
Community Implementations: [3 code implementations on CatalyzeX](https://www.catalyzex.com/paper/personalized-benchmarking-with-the-ludwig/code)