Keywords: Neural architecture search, Search space, Benchmark, Dataset
TL;DR: A benchmark for NAS on a macro search space, comprising 91k unique models with accuracy and latency measurements.
Abstract: Neural architecture search (NAS) has been used successfully to design numerous high-performance neural networks. However, NAS is typically compute-intensive, so most existing approaches restrict the search to deciding the operations and topological structure of a single block only; the same block is then stacked repeatedly to form an end-to-end model. Although this approach reduces the size of the search space, recent studies show that a macro search space, which allows blocks within a model to differ, can lead to better performance. To enable a systematic study of the performance of NAS algorithms on a macro search space, we release Blox – a benchmark consisting of 91k unique models trained on the CIFAR-100 dataset. The dataset also includes runtime measurements of all the models on a diverse set of hardware platforms. We perform extensive experiments comparing existing algorithms, which are well studied on cell-based search spaces, with emerging blockwise approaches that aim to make NAS scalable to much larger macro search spaces. The Blox benchmark and code are available at https://github.com/SamsungLabs/blox.
Supplementary Material: pdf
Contribution Process Agreement: Yes
In Person Attendance: No
Dataset Url: https://github.com/SamsungLabs/blox
Dataset Embargo: The dataset will be released, after obtaining approval from the authors' organisations, before the conference starts.
License: CC BY-NC
Author Statement: Yes
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2210.07271/code)