Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research

Published: 16 May 2022, Last Modified: 05 May 2023, AutoML-Conf 2022 (Main Track)
Abstract: We present Syne Tune, a library for large-scale distributed hyperparameter optimization (HPO). Syne Tune's modular architecture allows users to easily switch between different execution backends to facilitate experimentation and makes it easy to contribute new optimization algorithms. To foster reproducible benchmarking, Syne Tune provides an efficient simulator backend and a benchmarking suite, which are essential for large-scale evaluations of distributed asynchronous HPO algorithms on tabulated and surrogate benchmarks. We showcase these functionalities with a range of state-of-the-art gradient-free optimizers, including multi-fidelity and transfer learning approaches on popular benchmarks from the literature. Additionally, we demonstrate the benefits of Syne Tune for constrained and multi-objective HPO applications through two use cases: the former considers hyperparameters that induce fair solutions and the latter automatically selects machine types along with the conventional hyperparameters.
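To illustrate the switchable-backend design described in the abstract, the sketch below shows a minimal tuning loop in Syne Tune's Python API. It is a sketch, not a reference implementation: it assumes the API around the time of publication (argument names such as trial_backend have changed across versions), and train_script.py is a hypothetical user training script that reports "val_loss" and "epoch" through syne_tune.Reporter.

    from syne_tune import Tuner, StoppingCriterion
    from syne_tune.backend import LocalBackend
    from syne_tune.config_space import loguniform, randint
    from syne_tune.optimizer.baselines import ASHA

    # Hyperparameter search space; "epochs" is passed through as a constant.
    config_space = {
        "lr": loguniform(1e-4, 1e-1),
        "batch_size": randint(16, 128),
        "epochs": 100,
    }

    tuner = Tuner(
        # Swapping LocalBackend for another backend (e.g., the simulator)
        # is a one-line change. train_script.py is a placeholder script
        # that reports "val_loss" and "epoch" via syne_tune.Reporter.
        trial_backend=LocalBackend(entry_point="train_script.py"),
        # ASHA: asynchronous multi-fidelity scheduler that stops
        # poorly performing trials early, as evaluated per epoch.
        scheduler=ASHA(
            config_space,
            metric="val_loss",
            mode="min",
            resource_attr="epoch",
            max_t=100,
        ),
        stop_criterion=StoppingCriterion(max_wallclock_time=3600),
        n_workers=4,
    )
    tuner.run()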
Keywords: hyperparameter optimization, multi-fidelity optimization, transfer-learning, multi-objective optimization
One-sentence Summary: We present a new HPO library built for both large-scale tuning and reproducible research.
Track: Special track for systems, benchmarks and challenges
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
CPU Hours: 500
GPU Hours: 12
TPU Hours: 0
Evaluation Metrics: Yes
Class Of Approaches: Bayesian Optimization, Evolutionary Methods, Multifidelity optimization, Multi-objective optimization, Constrained Optimization
Datasets And Benchmarks: NAS-Bench-201, FCNet, LCBench.
Main Paper And Supplementary Material: pdf
Steps For Environmental Footprint Reduction During Development: We improved the performance of simulating HPO methods to speed up our benchmarks.