Keywords: Neural architecture search, invertible neural network, variational graph auto-encoder, generative model
TL;DR: We propose an invertible framework for NAS that maps between architectures and performance in both directions.
Abstract: Neural Architecture Search (NAS) aims to find high-performing models, with candidate evaluation often being the most expensive step. While NAS-Bench datasets facilitate the development of performance prediction models by providing benchmark results, most existing work focuses on improving predictor accuracy, with limited attention to search strategies and the selection of initial architectures used for training.
In this work, we reformulate NAS as the inverse problem of performance prediction, utilizing Invertible Neural Networks (INNs) to construct a bidirectional performance prediction model, InvertNAS, that maps architectures to performance and, inversely, maps performance targets back to architectures. This formulation allows us to train the performance predictor and the search strategy jointly, in an end-to-end manner.
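(The paper does not include code; the following is a minimal sketch of the bidirectional-mapping idea, assuming a RealNVP-style affine coupling layer in PyTorch. All names, dimensions, and the encoding scheme are illustrative assumptions, not the authors' implementation.)

```python
# Illustrative sketch: an invertible map whose forward pass sends an
# architecture encoding to a latent code (from which performance could be
# read off), and whose inverse pass maps a target latent back to an
# architecture encoding. Assumption: a single RealNVP-style coupling layer.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Splits the input in half; one half parameterizes an affine
    transform of the other, which makes the layer exactly invertible."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2, 64), nn.ReLU(),
            nn.Linear(64, dim),  # outputs log-scale and shift, concatenated
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(log_s) + t], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=-1)

coupling = AffineCoupling(dim=16)
x = torch.randn(8, 16)           # hypothetical architecture encodings
z = coupling(x)                  # forward: encoding -> latent/performance side
x_rec = coupling.inverse(z)      # inverse: latent target -> encoding
assert torch.allclose(x, x_rec, atol=1e-5)  # exact invertibility
```

Because the same parameters define both directions, training the forward (prediction) map also trains the inverse (search) map, which is what makes an end-to-end coupling of predictor and search strategy possible in this kind of framework.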
We further propose a novel sampling strategy that selects promising initial architectures without requiring any candidate training.
Experiments show that InvertNAS outperforms state-of-the-art NAS methods on NAS-Bench-201 and NAS-Bench-NLP, and performs competitively on NAS-Bench-101 and NAS-Bench-301. These results demonstrate the effectiveness and query efficiency of our approach. We believe this inverse formulation provides a promising direction for future NAS research.
Supplementary Material: zip
Primary Area: optimization
Submission Number: 23147