Exploring single-path Architecture Search ranking correlations

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission
Keywords: Neural Architecture Search, AutoML, Neural Networks
Abstract: Recently presented benchmarks for Neural Architecture Search (NAS) provide the results of training thousands of different architectures in a specific search space, thus enabling the fair and rapid comparison of different methods. Based on these results, we quantify the ranking correlations of single-path architecture search methods in different search space subsets and under several training variations; studying their impact on the expected search results. The experiments support the few-shot approach and Linear Transformers, provide evidence against disabling cell topology sharing during the training phase or using strong regularization in the NAS-Bench-201 search space, and show the necessity of further research regarding super-network size and path sampling strategies.
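The ranking correlations the abstract refers to compare the architecture ranking induced by a super-network's accuracy estimates against the ground-truth ranking from a benchmark such as NAS-Bench-201. A minimal sketch of such a measurement, using Kendall's tau and Spearman's rho from SciPy (the accuracy values below are made up for illustration, not taken from the paper):

```python
from scipy.stats import kendalltau, spearmanr

# Hypothetical stand-alone accuracies of five architectures, as a NAS
# benchmark would report them after full training (assumed values).
benchmark_acc = [93.2, 91.5, 94.1, 90.8, 92.7]

# Hypothetical accuracy estimates for the same five architectures,
# obtained by evaluating paths of a trained super-network (assumed values).
supernet_acc = [88.0, 85.9, 88.5, 86.1, 87.0]

# Rank correlations: 1.0 means the super-network ranks architectures
# exactly as the benchmark does, 0.0 means no agreement.
tau, _ = kendalltau(benchmark_acc, supernet_acc)
rho, _ = spearmanr(benchmark_acc, supernet_acc)
print(f"Kendall tau: {tau:.2f}, Spearman rho: {rho:.2f}")
# → Kendall tau: 0.80, Spearman rho: 0.90
```

A high correlation on a search space subset suggests that picking the top architecture by super-network estimates will also rank highly on the benchmark, which is the premise the paper's experiments probe.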
One-sentence Summary: An empirical study of how several method variations affect the quality of the architecture ranking prediction.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=ajZ1hA570j