A Transferable General-Purpose Predictor for Neural Architecture Search

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Neural Architecture Search, Performance Estimation, Neural Predictor
Abstract: Understanding and modelling the performance of neural architectures is key to Neural Architecture Search (NAS). Performance predictors for neural architectures are widely used in low-cost NAS and achieve high ranking correlations between predicted and ground-truth performance in several search spaces. However, existing predictors are often designed around network encodings specific to a predefined search space and do not generalize across search spaces or to new families of architectures. In this work, we propose a transferable neural predictor for NAS that generalizes across architecture families by representing any candidate Convolutional Neural Network as a computation graph consisting only of primitive operators. Combining this representation with contrastive learning, we propose a semi-supervised graph representation learning procedure that leverages both labelled accuracies and unlabelled architectures from multiple families to train universal embeddings of computation graphs together with the performance predictor. Experiments on three NAS benchmarks, NAS-Bench-101, NAS-Bench-201, and NAS-Bench-301, demonstrate that a predictor pre-trained on other families transfers well to a new family of architectures with a completely different design after fine-tuning on a small amount of data. We then show that when the proposed transferable predictor is used in NAS, it achieves search results comparable to the state of the art on NAS-Bench-101 at a low evaluation cost.
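The pipeline described in the abstract, encoding any CNN as a computation graph over primitive operators, learning graph embeddings with a contrastive objective on unlabelled architectures, and regressing accuracy from the embedding, can be illustrated with a minimal PyTorch sketch. Everything below (the operator vocabulary, the GCN-style encoder, the NT-Xent loss, and all hyperparameters) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical vocabulary of primitive operators; the paper's actual set may differ.
PRIMITIVES = ["conv3x3", "conv1x1", "add", "concat", "maxpool", "relu", "bn"]

class GraphEncoder(nn.Module):
    """GCN-style encoder over a computation graph of primitive operators.
    Node features are one-hot operator types."""
    def __init__(self, n_ops=len(PRIMITIVES), hidden=64, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(n_ops, hidden)
        self.convs = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(n_layers)])

    def forward(self, adj, ops):
        # adj: (N, N) adjacency matrix; ops: (N, n_ops) one-hot operator types
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)  # self-loops
        deg = a_hat.sum(dim=-1, keepdim=True).clamp(min=1.0)     # mean aggregation
        h = self.embed(ops)
        for conv in self.convs:
            h = F.relu(conv(a_hat @ h / deg))                    # propagate along edges
        return h.mean(dim=0)                                     # graph-level embedding

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss between two augmented views (B, d) of a batch
    of graphs; corresponding rows of z1 and z2 are positive pairs."""
    b = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)           # (2B, d)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float("-inf"))                            # mask self-similarity
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(b)])
    return F.cross_entropy(sim, targets)

class Predictor(nn.Module):
    """Accuracy-regression head on top of the shared graph embedding."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = GraphEncoder(hidden=hidden)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, adj, ops):
        return self.head(self.encoder(adj, ops))
```

Under these assumptions, pre-training would minimize nt_xent over unlabelled computation graphs drawn from several architecture families (with, e.g., edge-dropping augmentations producing the two views of each graph), after which the regression head, and optionally the encoder, is fine-tuned on the small labelled set of the target family.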
One-sentence Summary: A novel, performant, transferable performance predictor for Neural Architecture Search