Abstract: Neural Architecture Search (NAS) has become a promising paradigm for automatic architecture engineering. Previously proposed zero-cost proxies correlate strongly with the number of parameters and hence tend to select the largest architecture. This selection bias makes it difficult to observe the intrinsic traits of the architectures. To address this issue, we examine zero-shot NAS from a new, results-oriented viewpoint and propose several feature-based indicators. Specifically, we craft multiple mathematical indicators from the feature maps and design concrete ways to use them to enhance existing zero-cost proxies. These indicators reflect architecture quality and are fully independent of the data labels. We implement our method in Python and conduct comprehensive experiments on three popular benchmarks. The experimental results show that our feature-based indicators are effective, presenting moderate to strong correlation with the test accuracy. Moreover, the optimization method significantly improves the performance of existing proxies and alleviates the selection bias. For instance, our optimized proxy achieves 0.15 higher correlation and over 36% less bias than the original method, at a cost of only 0.54 extra seconds of computation.
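To make the idea concrete, the sketch below shows a hypothetical label-free, feature-based indicator and how one might measure its rank correlation with test accuracy. The indicator (log-variance of a flattened feature map) and all names here are illustrative assumptions, not the paper's actual definitions.

```python
# Hedged sketch: a toy label-free feature-map indicator plus Kendall's tau,
# the kind of rank correlation commonly used to evaluate zero-cost proxies.
# The indicator formula is an assumption for illustration only.
import math
import random


def feature_variance_score(feature_map):
    """Toy indicator: log-variance of a flattened feature map (no labels used)."""
    n = len(feature_map)
    mean = sum(feature_map) / n
    var = sum((x - mean) ** 2 for x in feature_map) / n
    return math.log(var + 1e-12)  # epsilon guards against log(0)


def kendall_tau(xs, ys):
    """Kendall's tau-a rank correlation between two equal-length score lists."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)


# Example: score a few mock "architectures" (random feature maps with growing
# spread) and correlate the indicator with mock test accuracies.
random.seed(0)
maps = [[random.gauss(0, sigma) for _ in range(64)] for sigma in (0.5, 1.0, 2.0, 4.0)]
scores = [feature_variance_score(m) for m in maps]
accuracies = [60.0, 70.0, 80.0, 90.0]  # mock values for illustration
print(kendall_tau(scores, accuracies))
```

A real evaluation would replace the mock feature maps and accuracies with activations extracted from benchmark architectures and their recorded test accuracies; Kendall's tau of 1.0 indicates perfect rank agreement and -1.0 perfect disagreement.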