NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy

29 Sept 2021, 00:30 (modified: 14 Mar 2022, 19:34) · ICLR 2022 Poster
Keywords: neural architecture search, AutoML
Abstract: The release of tabular benchmarks, such as NAS-Bench-101 and NAS-Bench-201, has significantly lowered the computational overhead of conducting scientific research in neural architecture search (NAS). Although they have been widely adopted and used to tune real-world NAS algorithms, these benchmarks are limited to small search spaces and focus solely on image classification. Recently, several new NAS benchmarks have been introduced that cover significantly larger search spaces over a wide range of tasks, including object detection, speech recognition, and natural language processing. However, substantial differences among these NAS benchmarks have so far prevented their widespread adoption, limiting researchers to just a few benchmarks. In this work, we present an in-depth analysis of popular NAS algorithms and performance prediction methods across 25 different combinations of search spaces and datasets, finding that many conclusions drawn from a few NAS benchmarks do *not* generalize to other benchmarks. To help remedy this problem, we introduce NAS-Bench-Suite, a comprehensive and extensible collection of NAS benchmarks, accessible through a unified interface, created to facilitate reproducible, generalizable, and rapid NAS research. Our code is available at https://github.com/automl/naslib.
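The core idea behind a tabular benchmark, as used in the abstract, is that every architecture in a (small) search space has already been trained, so "evaluating" an architecture reduces to a table lookup behind a common query interface. The following is a minimal sketch of that idea; the class and method names here are hypothetical illustrations, not the actual NASLib API.

```python
# Hypothetical sketch: querying a tabular NAS benchmark through a unified
# interface. Names (TabularBenchmark, query) are illustrative only and do
# NOT reflect the real NASLib API.
from typing import Dict, Tuple

# Encode an architecture as a tuple of operation indices.
ArchEncoding = Tuple[int, ...]

class TabularBenchmark:
    """Maps precomputed architecture encodings to validation accuracies."""

    def __init__(self, table: Dict[ArchEncoding, float]):
        self.table = table

    def query(self, arch: ArchEncoding) -> float:
        """Return the precomputed validation accuracy for `arch`."""
        return self.table[arch]

# Toy table standing in for a real benchmark such as NAS-Bench-201.
bench = TabularBenchmark({(0, 1, 2): 0.91, (1, 1, 0): 0.88, (2, 0, 1): 0.85})

# "Search" over a tabular benchmark is just lookups -- no training needed.
best = max(bench.table, key=bench.query)
print(best, bench.query(best))  # → (0, 1, 2) 0.91
```

A unified interface of this shape is what lets the same NAS algorithm run unchanged across many benchmarks: the algorithm only ever calls `query`, regardless of which search space or task sits behind it.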
One-sentence Summary: We show that you cannot get away with only NAS-Bench-101 and -201; to fix this, we release a unified NAS benchmark suite with 25 benchmarks.