JAHS-Bench-201: A Foundation For Research On Joint Architecture And Hyperparameter Search

Published: 17 Sept 2022, Last Modified: 23 May 2023. NeurIPS 2022 Datasets and Benchmarks Track.
Keywords: Joint Architecture and Hyperparameter Search, Neural Architecture Search, Hyperparameter Optimization, Surrogate Benchmark, Multi-fidelity, Multi-objective, Cost-aware
Abstract: The past few years have seen the development of many benchmarks for Neural Architecture Search (NAS), fueling rapid progress in NAS research. However, recent work showing that good hyperparameter settings can matter more than choosing the best architecture calls for a shift in focus towards Joint Architecture and Hyperparameter Search (JAHS). Therefore, we present JAHS-Bench-201, the first collection of surrogate benchmarks for JAHS, built to also facilitate research on multi-objective, cost-aware and (multi-)multi-fidelity optimization algorithms. To the best of our knowledge, JAHS-Bench-201 is based on the most extensive dataset of neural network performance data in the public domain. It comprises approximately 161 million data points and 20 performance metrics for three deep learning tasks, while featuring a 14-dimensional search and fidelity space that extends the popular NAS-Bench-201 space. With JAHS-Bench-201, we hope to democratize research on JAHS and lower the barrier to entry of an extremely compute-intensive field, e.g., by reducing the compute time to run a JAHS algorithm from 5 days to only a few seconds.
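The speedup claimed above comes from replacing each real training run with a query to a trained surrogate model. A minimal sketch of that workflow is below; note that all names here (sample_config, surrogate_evaluate, the metric keys, and the three illustrative search dimensions) are hypothetical stand-ins, not the actual API of the jahs_bench package, which is documented in the GitHub repository.

```python
import random

def sample_config(rng):
    """Draw a random point from a few illustrative search dimensions
    (the real space is 14-dimensional)."""
    return {
        "LearningRate": 10 ** rng.uniform(-3, 0),
        "WeightDecay": 10 ** rng.uniform(-5, -2),
        "N": rng.choice([1, 3, 5]),  # e.g. a depth multiplier
    }

def surrogate_evaluate(config, rng):
    """Toy stand-in for a surrogate query: returns predicted metrics
    in milliseconds instead of training a network for hours."""
    return {
        "valid-acc": 80.0 + 15.0 * rng.random(),   # fake predicted accuracy
        "runtime": rng.uniform(100.0, 5000.0),     # fake predicted cost (s)
    }

# Random search over the surrogate: 100 "training runs" cost only
# 100 cheap function calls instead of days of GPU time.
rng = random.Random(0)
best = max(
    (surrogate_evaluate(sample_config(rng), rng) for _ in range(100)),
    key=lambda r: r["valid-acc"],
)
print(f"best predicted validation accuracy: {best['valid-acc']:.2f}%")
```

The design point this illustrates is that a JAHS optimizer only needs a config-in, metrics-out function; swapping the surrogate for real training leaves the search loop unchanged.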
Author Statement: Yes
TL;DR: We present JAHS-Bench-201, the first collection of surrogate benchmarks for Joint Architecture and Hyperparameter Search, built to also facilitate research on multi-objective, cost-aware and (multi-)multi-fidelity optimization algorithms.
URL: All relevant information on downloading and using our datasets and trained models can be accessed by following the instructions on our GitHub repository: https://github.com/automl/jahs_bench_201.
Dataset Url: https://github.com/automl/jahs_bench_201.
Supplementary Material: pdf
License: We release the code used to build our benchmark and perform our experiments under the MIT License (https://mit-license.org/), whereas we release data we created, including the performance metrics collected by us, the splits used to train, validate and test our surrogate models, and our surrogate models, under the CC BY 4.0 License (https://creativecommons.org/licenses/by/4.0/).
Contribution Process Agreement: Yes
In Person Attendance: Yes