AutoCoG: A Unified Data-Model Co-Search Framework for Graph Neural Networks

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: GCN, NAS
Abstract: Neural architecture search (NAS) has demonstrated success in discovering promising architectures for vision and language modeling tasks, and it has recently been applied to searching for graph neural networks (GNNs) as well. Despite this preliminary success, we argue that NAS must be further customized for GNNs, due to the topological complexity of GNN input data (graphs) as well as their notorious training instability. Beyond optimizing the GNN model architecture, we propose to simultaneously optimize the input graph topology via a set of parameterized data augmentation operators. This yields AutoCoG, the first unified data-model co-search NAS framework for GNNs. By defining a highly flexible data-model co-search space, AutoCoG is gracefully formulated as a principled bi-level optimization that can be solved end-to-end by differentiable search methods. AutoCoG also scales to searching deeper GCNs on larger datasets. Experiments show that AutoCoG consistently achieves state-of-the-art (SOTA) results, outperforming both hand-crafted GNNs and recent GNN-NAS methods, with gains of up to 2.04% on Cora, 2.54% on Citeseer, 2.08% on Pubmed, and 0.83% on ogbn-arxiv on our benchmarks.
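To make the co-search idea concrete, below is a minimal sketch of what a DARTS-style bi-level, differentiable data-model co-search loop could look like: model weights are updated in the inner step, while architecture parameters and graph-augmentation parameters are updated jointly in the outer step. All names here (MixedGNNLayer, augment_graph, alpha, gamma) are illustrative assumptions for the sketch, not AutoCoG's actual interface or search space.

```python
# Hedged sketch of alternating bi-level updates over model weights vs.
# (architecture, augmentation) parameters; names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedGNNLayer(nn.Module):
    """Softmax-weighted mixture of candidate propagation operators (illustrative)."""
    def __init__(self, dim, n_ops=3):
        super().__init__()
        self.ops = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_ops)])
        self.alpha = nn.Parameter(torch.zeros(n_ops))  # architecture parameters

    def forward(self, x, adj):
        w = F.softmax(self.alpha, dim=0)
        return F.relu(sum(wi * op(adj @ x) for wi, op in zip(w, self.ops)))

def augment_graph(adj, gamma):
    """Parameterized augmentation: a differentiable soft edge-dropout (illustrative)."""
    return adj * torch.sigmoid(gamma)  # learnable keep-probability

# Toy data standing in for a real graph dataset.
n, d, c = 8, 16, 3
x = torch.randn(n, d)
adj = (torch.rand(n, n) > 0.5).float()
y = torch.randint(0, c, (n,))

layer, head = MixedGNNLayer(d), nn.Linear(d, c)
gamma = nn.Parameter(torch.zeros(()))  # augmentation policy parameter

w_opt = torch.optim.SGD(list(layer.ops.parameters()) + list(head.parameters()), lr=1e-2)
a_opt = torch.optim.Adam([layer.alpha, gamma], lr=3e-3)

for step in range(10):
    # Inner step: update model weights on the training split.
    w_opt.zero_grad()
    loss = F.cross_entropy(head(layer(x, augment_graph(adj, gamma))), y)
    loss.backward()
    w_opt.step()

    # Outer step: update architecture + augmentation parameters on the validation split
    # (the same toy tensors stand in for a separate validation split here).
    a_opt.zero_grad()
    val_loss = F.cross_entropy(head(layer(x, augment_graph(adj, gamma))), y)
    val_loss.backward()
    a_opt.step()
```

After the search converges, a discrete architecture and augmentation policy would typically be derived from the learned alpha and gamma (e.g., by taking the argmax operator per layer) and retrained from scratch; this follow-up step is likewise an assumption based on standard differentiable-NAS practice, not a statement of AutoCoG's exact procedure.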
One-sentence Summary: We propose AutoCoG, a NAS framework that unifies model architecture search and data augmentation policy search in a differentiable manner.