Abstract: Performance predictors are widely used to mitigate the substantial resource consumption of neural architecture search. However, existing performance predictors are typically trained on randomly sampled architectures. Such random sampling not only wastes computation budget on evaluating many similar architectures, but also degrades predictor performance because the sampled architectures span the search space poorly. In this paper, we propose a contrastive learning-based sampling method to address these issues. Specifically, we first encode the architectures as directed acyclic graphs, from which a large number of augmented architectures are generated to learn invariant knowledge of architectures. We then maximize the agreement between augmented views of the same architecture, so that similar architectures are mapped to similar representations. Representative architectures are subsequently selected by clustering these representations, improving the coverage of the search space. We conduct extensive experiments on NAS-Bench-101 and NAS-Bench-201. The experimental results show that the proposed method improves the predictive ability of performance predictors compared with random sampling-based ones, and helps discover superior architectures when integrated with neural architecture search. In addition, an ablation study demonstrates the effectiveness of the contrastive learning and clustering components of the proposed sampling method.
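A minimal sketch of the pipeline the abstract describes, assuming a SimCLR-style NT-Xent objective, a simple masking augmentation, and k-means for the clustering step; the encoder architecture, the `encode_dag`/`augment` helpers, and all hyperparameters here are illustrative assumptions, not the authors' exact design:

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

def encode_dag(adj, ops, num_op_types):
    """Flatten a DAG (adjacency matrix + one-hot operation list) into a vector."""
    one_hot = np.eye(num_op_types)[ops]
    return np.concatenate([adj.flatten(), one_hot.flatten()]).astype(np.float32)

def augment(x, drop_prob=0.1):
    """Hypothetical augmentation: randomly zero entries (edge/op dropout)."""
    mask = (torch.rand_like(x) > drop_prob).float()
    return x * mask

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch of positive pairs (z1[i], z2[i])."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)        # (2N, d)
    sim = z @ z.t() / tau                              # scaled cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float('-inf'))                  # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy pool of random 7-node, 3-op-type architectures -- purely illustrative.
rng = np.random.default_rng(0)
pool = np.stack([
    encode_dag(np.triu(rng.integers(0, 2, (7, 7)), 1), rng.integers(0, 3, 7), 3)
    for _ in range(512)
])
x = torch.from_numpy(pool)

encoder = nn.Sequential(nn.Linear(x.size(1), 128), nn.ReLU(), nn.Linear(128, 32))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Contrastive pre-training: pull augmented views of an architecture together.
for step in range(200):
    idx = torch.randint(0, x.size(0), (64,))
    batch = x[idx]
    loss = nt_xent(encoder(augment(batch)), encoder(augment(batch)))
    opt.zero_grad(); loss.backward(); opt.step()

# Cluster the learned embeddings and keep the architecture nearest each
# centroid, so groups of similar architectures collapse into one sample.
with torch.no_grad():
    emb = F.normalize(encoder(x), dim=1).numpy()
km = KMeans(n_clusters=32, n_init=10, random_state=0).fit(emb)
reps = [int(np.argmin(np.linalg.norm(emb - c, axis=1))) for c in km.cluster_centers_]
print("representative architecture indices:", reps)
```

The selected representatives would then be evaluated (e.g., queried from NAS-Bench-101/201) to form the predictor's training set, in place of a uniformly random sample.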