AtomNAS: Fine-Grained End-to-End Neural Architecture Search

Sep 25, 2019 · Blind Submission · Readers: everyone
  • TL;DR: A new state of the art on ImageNet for the mobile setting
  • Abstract: The design of the search space is a critical problem for neural architecture search (NAS) algorithms. We propose a fine-grained search space composed of atomic blocks, minimal search units that are much smaller than those used in recent NAS algorithms. This search space facilitates direct selection of channel numbers and kernel sizes in convolutions. In addition, we propose a resource-aware architecture search algorithm that dynamically selects atomic blocks during training. The algorithm is further accelerated by a dynamic network shrinkage technique. Instead of following a search-and-retrain two-stage paradigm, our method searches and trains the target architecture simultaneously in an end-to-end manner. It achieves state-of-the-art performance under several FLOPs configurations on ImageNet with negligible search cost. We open our entire codebase at:
  • Keywords: Neural Architecture Search, Image Classification
  • Code:
  • Original PDF: pdf
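To make the abstract's "dynamic network shrinkage" idea concrete, below is a minimal, hypothetical sketch. It assumes each atomic block carries a learned importance (e.g., a scaling factor) and a FLOPs cost, and that blocks whose importance falls below a small threshold are dropped during training; the class names, cost model, and threshold are illustrative assumptions, not the paper's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class AtomicBlock:
    kernel_size: int      # kernel size of the block's depthwise convolution
    channels: int         # number of channels this atomic block contributes
    importance: float     # learned scaling factor gating the block (assumption)

    def flops(self, spatial: int) -> int:
        # Illustrative depthwise-conv cost: k*k multiplies per channel per pixel.
        return self.kernel_size ** 2 * self.channels * spatial ** 2

def shrink(blocks, spatial=14, threshold=1e-2):
    """Dynamic network shrinkage (sketch): permanently remove atomic blocks
    whose learned importance has fallen below a threshold, and report the
    FLOPs saved by doing so."""
    kept = [b for b in blocks if b.importance >= threshold]
    saved = sum(b.flops(spatial) for b in blocks) - sum(b.flops(spatial) for b in kept)
    return kept, saved

# A toy mix of atomic blocks with different kernel sizes; the second block's
# importance has been driven near zero by training, so shrinkage removes it.
blocks = [
    AtomicBlock(kernel_size=3, channels=8, importance=0.50),
    AtomicBlock(kernel_size=5, channels=8, importance=0.001),
    AtomicBlock(kernel_size=7, channels=8, importance=0.20),
]
kept, saved = shrink(blocks)
```

In this sketch, searching and training coincide: the importance values are ordinary trainable parameters, so pruning low-importance blocks yields the final architecture without a separate retraining stage.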