Graph Embedding for Neural Architecture Search with Input-Output Information

Anonymous

30 Sept 2021 (modified: 05 May 2023) · NeurIPS 2021 Workshop MetaLearn Blind Submission
Keywords: Neural Architecture Search, meta-features, graph variational autoencoder, semi-supervised learning
TL;DR: We extended an existing graph variational autoencoder for NAS to learn from input images and output features of a particular neural network in a semi-supervised manner.
Abstract: Graph representation learning has been used in neural architecture search, for example in performance prediction. Existing works have focused mostly on neural graph similarity without considering functionally similar networks with different architectures. In this work, we address this issue by using meta-information consisting of input images and output features of a particular neural network. We extend the arch2vec model, a graph variational autoencoder for neural architecture search, to learn from this novel kind of data. We demonstrate our approach on the NAS-Bench-101 search space and the CIFAR-10 dataset, and compare our model with the original arch2vec on a REINFORCE search task and a performance prediction task.
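The abstract describes a graph VAE trained semi-supervised: a reconstruction objective on all architectures, plus an auxiliary term on the subset of networks whose input-output features are known. The NumPy sketch below illustrates how such a combined objective could be assembled. It is only a schematic: the linear maps stand in for arch2vec's GNN encoder/decoder, and all names, dimensions, and the MSE stand-in losses are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Linear "encoder" standing in for arch2vec's GNN encoder (assumption).
    return x @ W_mu, x @ W_logvar

def elbo_terms(x, W_mu, W_logvar, W_dec):
    mu, logvar = encode(x, W_mu, W_logvar)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps           # reparameterization trick
    x_hat = z @ W_dec                             # linear "decoder" stand-in
    recon = np.mean((x - x_hat) ** 2)             # reconstruction loss (MSE stand-in)
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))
    return z, recon, kl

def semi_supervised_loss(x, y, labeled_mask, params, lam=1.0):
    W_mu, W_logvar, W_dec, W_head = params
    z, recon, kl = elbo_terms(x, W_mu, W_logvar, W_dec)
    # Auxiliary head regresses the output-feature targets y from the latent z,
    # applied only to the labeled subset -- this is the semi-supervised part.
    if labeled_mask.any():
        pred = z[labeled_mask] @ W_head
        sup = np.mean((pred - y[labeled_mask]) ** 2)
    else:
        sup = 0.0
    return recon + kl + lam * sup

# --- illustrative usage with random data ---
d_in, d_z, d_out, n = 8, 4, 3, 16
params = (rng.standard_normal((d_in, d_z)),
          rng.standard_normal((d_in, d_z)) * 0.01,  # small logvar weights
          rng.standard_normal((d_z, d_in)),
          rng.standard_normal((d_z, d_out)))
x = rng.standard_normal((n, d_in))   # flattened architecture encodings (illustrative)
y = rng.standard_normal((n, d_out))  # output-feature targets, known for a subset
mask = np.arange(n) < n // 2         # half the architectures carry I/O labels
loss = semi_supervised_loss(x, y, mask, params)
print(np.isfinite(loss))
```

With `labeled_mask` all false the objective reduces to the plain VAE loss, so unlabeled architectures still contribute to the embedding, which is the point of training semi-supervised.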