AutoTransfer: AutoML with Knowledge Transfer - An Application to Graph Neural Networks

Published: 22 Nov 2022, Last Modified: 05 May 2023 · NeurIPS 2022 GLFrontiers Workshop
Keywords: Graph Neural Networks, AutoML, Knowledge Transfer
TL;DR: We propose AutoTransfer, an AutoML solution that improves search efficiency by transferring the known architectural design knowledge to the novel task of interest.
Abstract: AutoML has demonstrated remarkable success in finding an effective neural architecture for a given machine learning task defined by a specific dataset and an evaluation metric. However, most existing AutoML techniques consider each task independently from scratch, which requires exploring a large number of architectures and incurs high computational cost. Here we propose AutoTransfer, an AutoML solution that improves search efficiency by transferring known architectural design knowledge to the novel task of interest. Our key innovations are a task-model bank that captures the training performance over a diverse set of GNN architectures and tasks, and a computationally efficient task embedding that can accurately measure the similarity between different tasks. Based on the task-model bank and the task embeddings, we estimate the design priors of desirable models for the novel task by aggregating a similarity-weighted sum of the top-K design distributions on tasks that are similar to the task of interest. The computed design priors can be used with any AutoML search algorithm. We evaluate AutoTransfer on six datasets in the graph machine learning domain. Experiments demonstrate that (i) our proposed task embedding can be computed efficiently, and that tasks with similar embeddings have similar best-performing architectures; (ii) AutoTransfer significantly improves search efficiency with the transferred design priors, reducing the number of explored architectures by an order of magnitude. Finally, we release GNN-Bank-101, a large-scale dataset of detailed GNN training information of 120,000 task-model combinations, to facilitate and inspire future research.
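To make the prior-estimation step concrete, below is a minimal sketch of the similarity-weighted aggregation the abstract describes. It is not the authors' implementation: the function name `estimate_design_prior`, the use of cosine similarity, and the softmax weighting are illustrative assumptions, as the abstract does not specify how similarities are computed or normalized.

```python
import numpy as np

def estimate_design_prior(task_embedding, bank_embeddings, bank_design_dists, k=3):
    """Estimate a design prior for a novel task as a similarity-weighted
    sum of the design distributions of its top-K most similar bank tasks.

    task_embedding:    (d,)   embedding of the novel task
    bank_embeddings:   (n, d) embeddings of the n tasks in the task-model bank
    bank_design_dists: (n, c) per-task distributions over c design choices
    """
    # Cosine similarity between the novel task and every bank task
    # (an assumed similarity measure; the paper may use another metric).
    a = task_embedding / np.linalg.norm(task_embedding)
    b = bank_embeddings / np.linalg.norm(bank_embeddings, axis=1, keepdims=True)
    sims = b @ a                                  # (n,)

    # Keep only the top-K most similar bank tasks.
    top_k = np.argsort(sims)[-k:]

    # Softmax over the top-K similarities gives non-negative weights
    # that sum to 1 (the exact weighting scheme is an assumption here).
    weights = np.exp(sims[top_k])
    weights /= weights.sum()

    # Similarity-weighted sum of the top-K design distributions.
    prior = weights @ bank_design_dists[top_k]    # (c,)
    return prior  # usable as a sampling prior by any AutoML search algorithm
```

The returned vector is a distribution over design choices (e.g., layer types or aggregation functions), so a search algorithm can sample candidate architectures from it instead of starting from a uniform prior.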