Keywords: Neural Architecture Search, NAS, AutoML, Computer Vision
TL;DR: Scalable multi-agent formulation of neural architecture search
Abstract: The Neural Architecture Search (NAS) problem is typically formulated as a graph search problem where the goal is to learn the optimal operations over edges in order to maximize a graph-level global objective. Due to the large architecture parameter space, efficiency is a key bottleneck preventing the practical use of NAS. In this paper, we address this issue by framing NAS as a multi-agent problem in which each agent controls a subset of the network and the agents coordinate to reach optimal architectures. We provide two distinct lightweight implementations with reduced memory requirements ($1/8$th of the state of the art) and performance above that of much more computationally expensive methods.
Theoretically, we demonstrate vanishing regrets of the form $\mathcal{O}(\sqrt{T})$, with $T$ being the total number of rounds.
Finally, since random search is an effective (and often ignored) baseline, we perform additional experiments on $3$ alternative datasets and $2$ network configurations, and achieve favorable results in comparison with this baseline and other competing methods.
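To make the multi-agent formulation above concrete, here is a minimal sketch, not the paper's exact algorithm: each edge of the architecture graph is controlled by an independent EXP3-style bandit agent that picks among candidate operations, and all agents are updated from a single shared graph-level reward. The operation names, the 8-edge graph, and the `evaluate` stub are hypothetical placeholders.

```python
import math
import random

# Hypothetical candidate operation set for each edge (names are illustrative).
OPS = ["conv3x3", "conv5x5", "skip_connect", "max_pool"]

class EdgeAgent:
    """One agent per edge: an EXP3-style bandit over candidate operations."""
    def __init__(self, n_ops, gamma=0.1):
        self.n = n_ops
        self.gamma = gamma            # exploration rate
        self.weights = [1.0] * n_ops
        self.last_choice = None
        self.last_prob = None

    def _probs(self):
        total = sum(self.weights)
        return [(1.0 - self.gamma) * w / total + self.gamma / self.n
                for w in self.weights]

    def act(self):
        p = self._probs()
        self.last_choice = random.choices(range(self.n), weights=p)[0]
        self.last_prob = p[self.last_choice]
        return self.last_choice

    def update(self, reward):
        # Importance-weighted reward estimate for the chosen arm only.
        estimate = reward / self.last_prob
        self.weights[self.last_choice] *= math.exp(self.gamma * estimate / self.n)

def evaluate(architecture):
    """Placeholder: in a real NAS loop this would train the sampled child
    network briefly and return a reward in [0, 1] (e.g., validation accuracy)."""
    return random.random()

# One agent per edge of the (hypothetical) 8-edge architecture graph.
agents = [EdgeAgent(len(OPS)) for _ in range(8)]

for t in range(100):                                # T rounds
    arch = [OPS[agent.act()] for agent in agents]   # agents act independently
    reward = evaluate(arch)                         # one global, graph-level signal
    for agent in agents:                            # credit is shared by all agents
        agent.update(reward)
```

Because each agent only maintains a small weight vector over its own edge's operations, memory grows with the number of edges rather than with the full joint architecture space, which is the intuition behind the reduced memory footprint claimed above.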
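For reference, the $\mathcal{O}(\sqrt{T})$ guarantee is a cumulative-regret bound. In standard adversarial-bandit notation (a generic reconstruction, not the paper's exact statement), the regret of an agent choosing actions $a_t$ from a set $\mathcal{A}$ with rewards $r_t$ is

$$
R_T = \max_{a \in \mathcal{A}} \sum_{t=1}^{T} r_t(a) \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} r_t(a_t)\right] = \mathcal{O}\big(\sqrt{T}\big),
$$

and "vanishing" means the per-round average $R_T / T \to 0$ as $T \to \infty$.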
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:1909.01051/code)