Graph Masked Autoencoder Enhanced Predictor for Neural Architecture Search

Published: 01 Jan 2022 · Last Modified: 13 Nov 2024 · IJCAI 2022 · CC BY-SA 4.0
Abstract: Performance estimation of neural architectures is a crucial component of neural architecture search (NAS). Neural predictors are currently a mainstream performance estimation method. However, training such a predictor with only a few architecture evaluations, as required for efficient NAS, is challenging. In this paper, we propose a graph masked autoencoder (GMAE)-enhanced predictor, which reduces the dependence on supervised data through self-supervised pre-training on untrained architectures. We compare the GMAE-enhanced predictor with existing predictors across different search spaces, and experimental results show that it achieves high query utilization. Moreover, combined with different search strategies, the GMAE-enhanced predictor discovers competitive architectures across different search spaces. Code and supplementary materials are available at https://github.com/kunjing96/GMAENAS.git.
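To make the pre-training idea concrete, below is a minimal, hedged sketch of a graph masked autoencoder over architecture graphs: node (operation) embeddings are randomly masked and the model is trained to reconstruct the masked operation types from the remaining graph structure. This is not the authors' implementation (see the linked repository); the class names, the dense-adjacency encoder, the mask ratio, and all layer sizes are illustrative assumptions.

```python
# Illustrative sketch only; not the authors' code. Assumes plain PyTorch,
# a dense adjacency matrix per architecture, and cross-entropy reconstruction
# of masked operation types.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGraphLayer(nn.Module):
    """One message-passing step: aggregate neighbor features via the adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: (N, N) dense adjacency with self-loops, x: (N, in_dim)
        return F.relu(self.lin(adj @ x))


class GMAESketch(nn.Module):
    """Masked-autoencoder sketch: embed ops, mask some nodes, reconstruct their op types."""
    def __init__(self, num_ops, hidden_dim=64, mask_ratio=0.3):
        super().__init__()
        self.op_embed = nn.Embedding(num_ops, hidden_dim)
        self.mask_token = nn.Parameter(torch.zeros(hidden_dim))
        self.encoder = SimpleGraphLayer(hidden_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, num_ops)
        self.mask_ratio = mask_ratio

    def forward(self, ops, adj):
        x = self.op_embed(ops)                              # (N, hidden_dim)
        mask = torch.rand(ops.shape[0]) < self.mask_ratio   # which nodes to hide
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        h = self.encoder(x, adj)                            # encode the partially masked graph
        logits = self.decoder(h)                            # predict the original op of each node
        if mask.any():
            return F.cross_entropy(logits[mask], ops[mask])
        return logits.sum() * 0.0                           # no node masked this step


# Toy usage: a 4-node cell with 5 candidate operations (values are arbitrary).
ops = torch.tensor([0, 2, 1, 4])
adj = torch.eye(4) + torch.tensor([[0, 1, 1, 0],
                                   [0, 0, 0, 1],
                                   [0, 0, 0, 1],
                                   [0, 0, 0, 0]], dtype=torch.float)
model = GMAESketch(num_ops=5)
loss = model(ops, adj)
loss.backward()  # one self-supervised pre-training step
```

After pre-training on many unlabeled (untrained) architectures, the encoder would be reused and fine-tuned with the few available accuracy labels to act as the performance predictor, which is the sense in which GMAE reduces the need for supervised data.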