Deeper Insights into Weight Sharing in Neural Architecture Search

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission
TL;DR: A comprehensive study of the impact of weight-sharing in Neural Architecture Search
Abstract: With the success of deep neural networks, Neural Architecture Search (NAS) as a way of automatic model design has attracted wide attention. Since training every child model from scratch is very time-consuming, recent works leverage weight sharing to speed up the model evaluation procedure. These approaches greatly reduce computation by maintaining a single copy of weights in the super-net and sharing them among all child models. However, weight sharing has no theoretical guarantee, and its impact has not been well studied before. In this paper, we conduct comprehensive experiments to reveal the impact of weight sharing: (1) The best-performing models from different runs, or even from consecutive epochs within the same run, show significant variance; (2) Even with high variance, we can extract valuable information from training the super-net with shared weights; (3) Interference between child models is a major factor that induces high variance; (4) Properly reducing the degree of weight sharing can effectively reduce variance and improve performance.
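To make the weight-sharing setup concrete, below is a minimal sketch (not the paper's implementation) of a super-net that keeps a single copy of the weights for every candidate operation, with each sampled child model reusing those shared weights instead of being trained from scratch. All names such as `SharedSuperNet` and `MixedLayer` are illustrative assumptions, written here in PyTorch.

```python
import random
import torch
import torch.nn as nn

class MixedLayer(nn.Module):
    """One searchable layer: every candidate op keeps its own shared weights."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),  # skip connection
        ])

    def forward(self, x, choice):
        # A child model picks exactly one candidate op per layer;
        # the op's weights are the same tensors used by every other child.
        return self.ops[choice](x)

class SharedSuperNet(nn.Module):
    def __init__(self, channels=16, num_layers=4, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.layers = nn.ModuleList([MixedLayer(channels) for _ in range(num_layers)])
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x, arch):
        # `arch` is a list with one op index per layer, i.e. one child model.
        x = self.stem(x)
        for layer, choice in zip(self.layers, arch):
            x = layer(x, choice)
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)

def sample_arch(num_layers=4, num_ops=3):
    """Uniformly sample a child architecture from the search space."""
    return [random.randrange(num_ops) for _ in range(num_layers)]

# One shared-weight training step: a random child is sampled, and its gradient
# update is applied directly to the single copy of super-net weights. Because
# every child writes to the same weights, children interfere with one another,
# which is the kind of effect the paper's experiments examine.
supernet = SharedSuperNet()
optimizer = torch.optim.SGD(supernet.parameters(), lr=0.01)
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
logits = supernet(images, sample_arch())
loss = nn.functional.cross_entropy(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```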
Code: https://drive.google.com/file/d/13v81qfUCr0vz_rKNqqR0QJHcumvOk2Jj/view?usp=sharing
Keywords: Neural Architecture Search, NAS, AutoML, AutoDL, Deep Learning, Machine Learning
Community Implementations: [2 code implementations on CatalyzeX](https://www.catalyzex.com/paper/arxiv:2001.01431/code)