Abstract: Neural architecture search (NAS) aims to automatically design high-performance deep neural network architectures and has shown great potential in various fields. However, the search process of NAS is computationally expensive, since many deep neural networks must be trained on GPUs to obtain their performance. Performance predictors can estimate the performance of architectures directly, without GPU-based training, and thus overcome this barrier. However, constructing a performance predictor requires labeling many architectures sampled from the corresponding NAS search space, which remains prohibitively costly. In this paper, we propose a Domain Adaptive performance Predictor (DAP), which is first trained on the labeled architectures provided by existing benchmarks and is then transferred to other search spaces via domain-adaptation techniques. To achieve this, we first propose a domain-agnostic feature extraction method that refines the domain-invariant features of neural architectures. We then propose a novel embedding method to learn shared representations of architecture operations. Experimental results demonstrate that DAP outperforms eight baselines on six popular search spaces. Notably, DAP requires a search cost of only 0.0002 GPU days to find architectures achieving 77.10% top-1 accuracy on ImageNet and 97.86% on CIFAR-10. In addition, we derive a theoretical upper bound on the generalization error in the target search space, further illustrating the generalizability of DAP. The source code is available at https://github.com/songxt3/DAP.
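The two ingredients named in the abstract, a shared operation embedding and domain-invariant feature extraction, can be illustrated with a minimal sketch. The sketch below is not the paper's implementation: it assumes a simple mean-of-embeddings architecture encoder and uses a linear-kernel maximum mean discrepancy (MMD) penalty as a stand-in for whatever domain-adaptation loss DAP actually employs; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared embedding table: each operation id (e.g. conv3x3, pooling,
# skip-connection) maps to the same vector regardless of which search
# space the architecture came from.
NUM_OPS, EMB_DIM = 8, 16
op_embeddings = rng.normal(size=(NUM_OPS, EMB_DIM))

def encode(arch_ops):
    """Encode an architecture (a list of operation ids) as the mean of
    its shared operation embeddings -- a simple domain-agnostic feature."""
    return op_embeddings[np.asarray(arch_ops)].mean(axis=0)

def mmd(source_feats, target_feats):
    """Linear-kernel MMD between two feature batches: the squared
    distance between their mean embeddings."""
    gap = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(gap @ gap)

# Toy batches: architectures from a labeled source space (a benchmark)
# and an unlabeled target space, each a sequence of 6 operation ids.
source = np.stack([encode(rng.integers(0, NUM_OPS, size=6)) for _ in range(32)])
target = np.stack([encode(rng.integers(0, NUM_OPS, size=6)) for _ in range(32)])

# During predictor training, a penalty like this would be added to the
# performance-regression loss to push the two domains' features together.
domain_gap = mmd(source, target)
print(domain_gap >= 0.0)
```

Minimizing such a discrepancy term jointly with the regression loss is one standard way to obtain features that transfer from the benchmark's search space to a new one, which is the role the abstract assigns to DAP's domain-agnostic feature extraction.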
External IDs: doi:10.1109/tc.2025.3624960