Towards Discovering Neural Architectures from Scratch

Published: 21 Oct 2022 · Last Modified: 01 Sep 2024 · NeurIPS 2022 Workshop MetaLearn Poster
Keywords: Neural Architecture Search, Search Space Design, Bayesian Optimization
TL;DR: We introduce an algebraic view on Neural Architecture Search that allows us to construct highly expressive search spaces with context-free grammars, and show that we can efficiently find well-performing architectures.
Abstract: The discovery of neural architectures from scratch is the long-standing goal of Neural Architecture Search (NAS). Searching over a wide spectrum of neural architectures can facilitate the discovery of previously unconsidered but well-performing architectures. In this work, we take a large step towards discovering neural architectures from scratch by expressing architectures algebraically. This algebraic view leads to a more general method for designing search spaces, which allows us to compactly represent search spaces that are hundreds of orders of magnitude larger than common spaces from the literature. Further, we propose a Bayesian Optimization strategy to efficiently search over such huge spaces, and demonstrate empirically that both our search space design and our search strategy can be superior to existing baselines. We open-source our algebraic NAS approach and provide APIs for PyTorch and TensorFlow at https://github.com/automl/towards_nas_from_scratch.
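
To make the grammar-based view concrete, below is a minimal, hypothetical sketch, not the authors' API from the linked repository: the nonterminals (`ARCH`, `BLOCK`, `OP`), the primitives (`conv3x3`, `maxpool`, ...), and the sampler are illustrative assumptions. It shows how a small context-free grammar can compactly describe a very large space of architecture terms, from which candidates can be derived by random expansion.

```python
import random

# Hypothetical toy grammar (not the paper's actual grammar): nonterminals map
# to lists of productions; each production is a sequence of symbols. Symbols
# absent from the grammar are terminals (primitive ops or combinators).
GRAMMAR = {
    "ARCH":  [["BLOCK"], ["seq", "BLOCK", "ARCH"]],           # compose blocks sequentially
    "BLOCK": [["OP"], ["residual", "OP"]],                    # optionally wrap an op in a skip
    "OP":    [["conv3x3"], ["conv1x1"], ["maxpool"], ["id"]], # primitive operations
}

def sample(symbol="ARCH", max_depth=8, depth=0):
    """Sample an algebraic architecture term by expanding the grammar at random."""
    if symbol not in GRAMMAR:                # terminal: return the primitive as-is
        return symbol
    productions = GRAMMAR[symbol]
    if depth >= max_depth:                   # force termination via the shortest rule
        productions = [min(productions, key=len)]
    rhs = random.choice(productions)
    children = [sample(s, max_depth, depth + 1) for s in rhs]
    return children[0] if len(children) == 1 else tuple(children)

if __name__ == "__main__":
    print(sample())  # e.g. ('seq', ('residual', 'conv3x3'), 'maxpool')
```

Each derivation yields an algebraic term such as `('seq', ('residual', 'conv3x3'), 'maxpool')`; because productions compose recursively, even a handful of rules spans a combinatorially large space, which is the property that lets grammar-defined search spaces dwarf hand-enumerated ones as the abstract describes.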
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/towards-discovering-neural-architectures-from/code) (via CatalyzeX)