Keywords: Machine Learning, Graph Analysis, Out-of-distribution, Benchmark
Abstract: Out-of-distribution (OOD) learning deals with scenarios in which training and test data follow different distributions. Although general OOD problems have been studied intensively in machine learning, graph OOD is only an emerging area of research, and a systematic benchmark tailored to evaluating graph OOD methods is still lacking. In this work, we aim to develop such a benchmark, known as GOOD, specifically for graphs. We explicitly distinguish between covariate and concept shifts and design data splits that accurately reflect the different shifts. We consider both graph and node prediction tasks, as there are key differences in designing shifts for the two. Overall, GOOD contains 11 datasets with 17 domain selections; combined with the covariate, concept, and no-shift settings, this yields 51 different splits. We provide performance results for 10 commonly used baseline methods, each with 10 random runs, giving 510 dataset-model combinations in total. Our results show significant performance gaps between in-distribution and OOD settings, and they also shed light on how different methods behave under covariate versus concept shifts. GOOD is a growing project and is expected to expand in both the quantity and variety of its resources as the area develops. The benchmark can be accessed via https://github.com/divelab/GOOD/.
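To illustrate how a domain selection and shift type map to a concrete split, below is a minimal Python sketch following the loading pattern documented in the GOOD repository. The `GOODHIV.load` entry point, its `domain`/`shift`/`generate` arguments, and the returned split keys are taken from that repository's documented usage and should be treated as assumptions here, not as API guaranteed by this page.

```python
# Minimal sketch: loading one GOOD split (assumes the GOOD package from
# https://github.com/divelab/GOOD is installed). Names below follow the
# repository's documented usage and are assumptions, not guaranteed API.
from GOOD.data.good_datasets.good_hiv import GOODHIV

# domain: one of the dataset's domain selections (e.g. 'scaffold' or 'size'
# for GOOD-HIV); shift: 'covariate', 'concept', or 'no_shift'.
datasets, meta_info = GOODHIV.load(
    dataset_root='datasets',
    domain='scaffold',
    shift='covariate',
    generate=False,  # use the pre-generated splits rather than regenerating
)

train_set = datasets['train']  # in-distribution training graphs
ood_test_set = datasets['test']  # out-of-distribution test graphs (assumed key)
print(meta_info)
```

Each (dataset, domain, shift) triple selects one of the 51 splits described in the abstract; a baseline would then be trained on `train_set` and evaluated on the OOD test split.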
URL: https://github.com/divelab/GOOD/
License: The GOOD code is released under the GNU General Public License, Version 3 (29 June 2007).
The datasets are released under the MIT License.
Author Statement: Yes
Dataset Url: https://drive.google.com/drive/folders/1EcSGRkNxBLOUoRoLuhaazQZTKRGwaquX?usp=sharing
Supplementary Material: pdf
Contribution Process Agreement: Yes
In Person Attendance: Yes
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/good-a-graph-out-of-distribution-benchmark/code)