A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs

Anonymous

Sep 29, 2021 (edited Oct 06, 2021) · ICLR 2022 Conference Blind Submission
  • Keywords: Graph Neural Networks, Distribution Shifts, Out-of-Distribution Generalization
  • Abstract: Distribution shifts, in which the training distribution differs from the testing distribution, can significantly degrade the performance of Graph Neural Networks (GNNs). Although some existing graph classification benchmarks consider distribution shifts, we are far from understanding their effects on graphs, and more specifically how they differ from distribution shifts in tensor data such as images. We ask: (1) how useful are existing domain generalization methods for tackling distribution shifts on graph data? (2) are GNNs capable of generalizing to test graphs from unseen distributions? As a first step toward answering these questions, we curate GDS, a benchmark of 8 datasets reflecting a diverse range of distribution shifts across graphs. We observe that in most cases, both a suitable domain generalization algorithm and a strong GNN backbone model are needed to optimize out-of-distribution test performance. However, even with carefully chosen combinations of models and algorithms, out-of-distribution performance remains much lower than in-distribution performance. This large gap underscores the need for domain generalization algorithms specifically tailored to graphs and for strong GNNs that generalize well to out-of-distribution graphs. To facilitate further research, we provide an open-source package that administers the GDS benchmark with modular combinations of popular domain generalization algorithms and GNN backbone models.
  • One-sentence Summary: We study the out-of-distribution generalization problem on graphs by thoroughly examining existing algorithms and GNN models on datasets with a diverse range of distribution shifts curated by us.