Graph Embedding for Neural Architecture Search with Input-Output Information

25 Feb 2022 (modified: 05 May 2023) · AutoML 2022 (Late-Breaking Workshop)
Abstract: Graph representation learning has been widely used in neural architecture search as part of performance prediction models. Existing works have focused mostly on neural graph similarity without considering functionally similar networks that have different architectures. In this work, we address this issue by using meta-information of input images and output features of a particular neural network. We extend the arch2vec model, a graph variational autoencoder for neural architecture search, to learn from this novel kind of data in a semi-supervised manner. We demonstrate our approach on the NAS-Bench-101 search space and the CIFAR-10 dataset, and compare our model with the original arch2vec on a REINFORCE search task and a performance prediction task. We also present a semi-supervised accuracy predictor and discuss the advantages of both variants. The results are competitive with the original model and show improved performance.
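The semi-supervised objective described in the abstract can be illustrated with a minimal sketch: a VAE reconstruction/KL term computed on all architectures, plus a supervised accuracy-regression term on the labeled subset. All function names, the NaN-masking convention, and the weight `alpha` below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))

def semi_supervised_loss(recon_err, mu, logvar, pred_acc, true_acc, alpha=1.0):
    """Joint loss: ELBO terms on all graphs + accuracy MSE on labeled ones.

    recon_err: summed graph reconstruction error (adjacency + operations)
    pred_acc / true_acc: predicted and measured accuracies; true_acc
        entries are NaN for unlabeled architectures (an assumed convention)
    alpha: weight of the supervised term (an assumed hyperparameter)
    """
    elbo = recon_err + kl_divergence(mu, logvar)
    labeled = ~np.isnan(true_acc)
    sup = np.mean((pred_acc[labeled] - true_acc[labeled]) ** 2) if labeled.any() else 0.0
    return elbo + alpha * sup

# Toy usage: three architectures, two with measured accuracies
mu = np.zeros((3, 2)); logvar = np.zeros((3, 2))
pred = np.array([0.91, 0.88, 0.90])
true = np.array([0.93, np.nan, 0.89])
loss = semi_supervised_loss(0.5, mu, logvar, pred, true)
```

The unlabeled architecture still contributes to the unsupervised ELBO terms, which is what lets the encoder exploit the full search space while accuracy labels remain scarce.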
Keywords: Neural Architecture Search, meta-features, graph variational autoencoder, semi-supervised learning, performance prediction
One-sentence Summary: We extended an existing graph variational autoencoder for NAS to learn from input images and output features of a particular neural network in a semi-supervised manner.
Track: Main track
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Gabriela Suchopárová, suchoparova@cs.cas.cz
CPU Hours: 0
GPU Hours: 0
TPU Hours: 0
Class Of Approaches: Meta-Learning, Representation learning
Datasets And Benchmarks: CIFAR-10, NAS-Bench-101
Code And Dataset Supplement: zip
Main Paper And Supplementary Material: pdf