Online Learning of Graph Neural Networks: When Can Data Be Permanently Deleted

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission
Keywords: graph neural networks, online learning
Abstract: Online learning of graph neural networks (GNNs) faces the challenges of distribution shift and ever-growing, changing training data as temporal graphs evolve over time. This makes it inefficient to train over the complete graph whenever new data arrives. Deleting old data at some point may be preferable for maintaining good performance and accounting for distribution shift. We systematically analyze these issues by incrementally training and evaluating GNNs in a sliding window over temporal graphs. We experiment with three representative GNN architectures and two scalable GNN techniques on three new datasets. In our experiments, the GNNs face the challenge that new vertices, edges, and even classes appear and disappear over time. Our results show that no more than 50% of the GNN's receptive field is necessary to retain at least 95% accuracy compared to training over the complete graph. In most cases, i.e., 14 out of 18 experiments, we even observe that a temporal window of size 1 is sufficient to retain at least 90% accuracy.
One-sentence Summary: In online learning setups, GNNs need only a few past time steps to maintain high accuracy.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=fJUo_w1znL
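
Below is a minimal, self-contained sketch (not taken from the submission) of the sliding-window incremental training setup described in the abstract. The TinyGCN model, the random_snapshot data generator, the window size, and all hyperparameters are illustrative assumptions, and the block-diagonal merge ignores edges across snapshots for simplicity.

# Sketch: sliding-window incremental training of a GNN over a temporal graph.
# All names and hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_FEATURES, NUM_CLASSES, NODES_PER_STEP = 16, 4, 50


class TinyGCN(nn.Module):
    """One-layer GCN operating on a dense, row-normalized adjacency matrix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return self.lin(adj @ x)  # aggregate neighbor features, then project


def random_snapshot():
    """Stand-in for one time step of the temporal graph (random data)."""
    x = torch.randn(NODES_PER_STEP, NUM_FEATURES)
    adj = (torch.rand(NODES_PER_STEP, NODES_PER_STEP) < 0.1).float()
    adj = adj + torch.eye(NODES_PER_STEP)          # add self-loops
    adj = adj / adj.sum(dim=1, keepdim=True)       # row-normalize
    y = torch.randint(0, NUM_CLASSES, (NODES_PER_STEP,))
    return x, adj, y


def merge(snapshots):
    """Union of the snapshots in the window (block-diagonal adjacency here)."""
    xs, adjs, ys = zip(*snapshots)
    return torch.cat(xs), torch.block_diag(*adjs), torch.cat(ys)


model = TinyGCN(NUM_FEATURES, NUM_CLASSES)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
window, window_size = [], 2                        # e.g. keep only 2 past steps

for t in range(10):                                # stream of time steps
    window.append(random_snapshot())
    window = window[-window_size:]                 # permanently drop data outside the window
    x, adj, y = merge(window)

    model.train()
    for _ in range(20):                            # incremental fine-tuning on the window
        opt.zero_grad()
        loss = loss_fn(model(x, adj), y)
        loss.backward()
        opt.step()

    # Evaluate on the next snapshot, where new vertices, edges, and classes may appear.
    x_next, adj_next, y_next = random_snapshot()
    model.eval()
    with torch.no_grad():
        acc = (model(x_next, adj_next).argmax(dim=1) == y_next).float().mean()
    print(f"t={t}: window={len(window)} snapshots, next-step accuracy={acc:.2f}")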