LayerNAS: Neural Architecture Search in Polynomial Complexity

01 May 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: AutoML, Neural Architecture Search, Model Optimization
TL;DR: LayerNAS formulates multi-objective NAS as a combinatorial optimization problem and constrains the search complexity to be polynomial.
Abstract:

Neural Architecture Search (NAS) has become a popular method for discovering effective model architectures, especially for target hardware. As such, NAS methods that find optimal architectures under constraints are essential. In this paper, we propose LayerNAS, which addresses multi-objective NAS by transforming it into a combinatorial optimization problem, effectively constraining the search complexity to be polynomial.

LayerNAS rigorously derives its method from the fundamental assumption that modifications to previous layers have no impact on subsequent layers. For search spaces with $L$ layers that satisfy this assumption, the method performs a layerwise search, selecting for each layer from a set of search options $\mathbb{S}$. LayerNAS groups model candidates by one objective, such as model size or latency, and searches for the optimal model under another objective, thereby decoupling the cost and reward elements of the search. This limits the search complexity to $O(H \cdot |\mathbb{S}| \cdot L)$, where $H$ is a constant set in LayerNAS.

Our experiments show that LayerNAS consistently discovers superior models compared to strong baselines across a variety of search spaces, including search spaces derived from NATS-Bench, MobileNetV2, and MobileNetV3.

Supplementary Material: zip
Submission Number: 1225