Keywords: Efficient Neural Architecture Search, AutoML
Abstract: Neural Architecture Search (NAS) has long been an important research direction, aiming to replace labor-intensive manual architecture design.
Since the introduction of weight sharing in NAS, the resource and time consumption of architecture searches has been significantly reduced.
In addition, variants of NAS methods have been proposed that eliminate the need for retraining by inferring model parameters directly from the shared weights after the search.
However, these methods are mainly based on the MobileNet search space, which primarily supports size search (e.g., over kernel sizes and channel widths).
For the equally important topology search space, no retraining-free NAS method has been proposed.
In this work, we fill this gap by proposing a retraining-free NAS method for the topology search space.
Our method combines the advantages of the previously proposed Hypernetwork and Kshot-NAS approaches.
We also propose a new distillation and sampling method for this architecture.
We present results on NAS-Bench-201 and show that our method matches or even exceeds the performance of baselines that retrain after the search.
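The abstract builds on weight sharing over a topology (cell-based) search space such as NAS-Bench-201. The following is a minimal PyTorch sketch, not the authors' implementation: the names `SharedCell`, `sample_arch`, and the reduced operation set are illustrative assumptions. It shows how a supernet can hold all candidate operations on every edge of a cell, so that a sampled subnetwork simply indexes into the shared weights and can be evaluated without retraining.

```python
# Illustrative sketch of weight sharing in a NAS-Bench-201-style topology
# search space (assumed names and a reduced operation set, not the paper's code).
import random
import torch
import torch.nn as nn

CANDIDATE_OPS = ["skip", "conv3x3", "conv1x1", "avgpool"]  # illustrative subset

def make_op(name: str, channels: int) -> nn.Module:
    if name == "skip":
        return nn.Identity()
    if name == "conv3x3":
        return nn.Conv2d(channels, channels, 3, padding=1, bias=False)
    if name == "conv1x1":
        return nn.Conv2d(channels, channels, 1, bias=False)
    return nn.AvgPool2d(3, stride=1, padding=1)

class SharedCell(nn.Module):
    """A cell whose edges (i -> j, i < j) each carry all candidate operations."""
    def __init__(self, channels: int, num_nodes: int = 4):
        super().__init__()
        self.num_nodes = num_nodes
        self.edges = nn.ModuleDict()
        for j in range(1, num_nodes):
            for i in range(j):
                self.edges[f"{i}->{j}"] = nn.ModuleList(
                    [make_op(op, channels) for op in CANDIDATE_OPS]
                )

    def forward(self, x: torch.Tensor, arch: dict) -> torch.Tensor:
        # `arch` maps each edge name to the index of its chosen operation.
        states = [x]
        for j in range(1, self.num_nodes):
            s = sum(
                self.edges[f"{i}->{j}"][arch[f"{i}->{j}"]](states[i])
                for i in range(j)
            )
            states.append(s)
        return states[-1]

def sample_arch(num_nodes: int = 4) -> dict:
    """Uniformly sample one candidate operation per edge."""
    return {
        f"{i}->{j}": random.randrange(len(CANDIDATE_OPS))
        for j in range(1, num_nodes) for i in range(j)
    }

# A sampled architecture reuses the supernet weights directly, so it can be
# evaluated immediately, without a separate retraining step.
cell = SharedCell(channels=16)
arch = sample_arch()
out = cell(torch.randn(2, 16, 32, 32), arch)
print(out.shape)  # torch.Size([2, 16, 32, 32])
```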
Submission Number: 28