Keywords: Graph Neural Networks, Distribution Shifts, Out-of-Distribution Generalization
TL;DR: We study the out-of-distribution generalization problem on graphs by thoroughly examining existing domain generalization algorithms, graph augmentation techniques, and GNN models on datasets with a diverse range of distribution shifts curated by us.
Abstract: Distribution shifts, in which the training distribution differs from the testing distribution, can significantly degrade the performance of Graph Neural Networks (GNNs). We curate GDS, a benchmark of eight datasets reflecting a diverse range of distribution shifts across graphs. We observe that: (1) most domain generalization algorithms fail to work when applied to distribution shifts on graphs; and (2) combinations of powerful GNN models and augmentation techniques usually achieve the best out-of-distribution performance. These findings emphasize the need for domain generalization algorithms tailored to graphs and for additional graph augmentation techniques that enhance the robustness of predictors.