AtomNAS: Fine-Grained End-to-End Neural Architecture Search

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • TL;DR: A new state of the art on ImageNet in the mobile setting
  • Abstract: The design of the search space is a critical problem for neural architecture search (NAS) algorithms. We propose a fine-grained search space built from atomic blocks, a minimal search unit that is much smaller than the blocks used in recent NAS algorithms. This search space permits direct selection of channel numbers and kernel sizes in convolutions. In addition, we propose a resource-constrained architecture search algorithm that dynamically selects atomic blocks during training. The algorithm is further accelerated by a dynamic network shrinkage technique. Instead of the usual two-stage search-and-retrain paradigm, our method searches and trains the target architecture simultaneously, in an end-to-end manner. It achieves state-of-the-art performance under several FLOPs budgets on ImageNet with negligible search cost. (A minimal sketch of the atomic-block idea appears after this list.)
  • Keywords: Neural Architecture Search, Image Classification
  • Code: https://anonymous.4open.science/r/ced78872-1992-43b9-ad69-2d611a14616d/
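To make the atomic-block idea concrete, here is a minimal PyTorch sketch. It is an illustration under assumptions, not the authors' implementation (see the code link above for that): each atom is a pointwise convolution, a depthwise-style convolution of some kernel size, and a second pointwise convolution; the batch-norm scale after the middle step serves as the atom's importance score; an L1 penalty on those scores is added to the training loss; and atoms whose scores fall below a threshold are dropped during training (dynamic network shrinkage). The class names, penalty weight, and threshold are all hypothetical.

```python
# Hypothetical sketch of AtomNAS-style atomic blocks; names and
# hyper-parameters are illustrative, not taken from the paper's code.
import torch
import torch.nn as nn


class AtomicBlock(nn.Module):
    """One atom: 1x1 conv -> kxk conv on a single channel -> 1x1 conv.

    With a single middle channel, the kxk conv is effectively depthwise.
    The BatchNorm scale (gamma) acts as the atom's importance score.
    """

    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.expand = nn.Conv2d(in_ch, 1, 1, bias=False)
        self.dw = nn.Conv2d(1, 1, kernel_size,
                            padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(1)   # bn.weight is the importance score
        self.project = nn.Conv2d(1, out_ch, 1, bias=False)

    def importance(self):
        return self.bn.weight.abs().item()

    def forward(self, x):
        return self.project(self.bn(self.dw(self.expand(x))))


class SearchableLayer(nn.Module):
    """A layer is a sum of atoms with mixed kernel sizes, so pruning atoms
    simultaneously selects channel counts and kernel sizes."""

    def __init__(self, in_ch, out_ch, num_atoms=9, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.atoms = nn.ModuleList(
            AtomicBlock(in_ch, out_ch, kernel_sizes[i % len(kernel_sizes)])
            for i in range(num_atoms))

    def forward(self, x):
        return sum(atom(x) for atom in self.atoms)

    def l1_penalty(self):
        # Sparsity term added to the task loss; a resource-constrained
        # variant would weight each term by the atom's compute cost.
        return sum(atom.bn.weight.abs().sum() for atom in self.atoms)

    def shrink(self, threshold=1e-3):
        # Dynamic network shrinkage: permanently drop dead atoms
        # (keeping at least one so the layer stays functional).
        kept = [a for a in self.atoms if a.importance() > threshold]
        self.atoms = nn.ModuleList(kept or [self.atoms[0]])


# Usage sketch: penalize importances during training, shrink periodically.
layer = SearchableLayer(16, 16)
x = torch.randn(2, 16, 32, 32)
loss = layer(x).mean() + 1e-4 * layer.l1_penalty()  # 1e-4 is an assumed weight
loss.backward()
layer.shrink()
```

In practice one would fuse surviving atoms that share a kernel size into a single wider convolution for efficiency; the per-atom modules above are kept separate only for clarity.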