Universal Graph Continual Learning

Published: 27 Nov 2023, Last Modified: 27 Nov 2023. Accepted by TMLR.
Abstract: We address catastrophic forgetting in graph learning, where the arrival of new data from diverse task distributions often leads graph models to prioritize the current task and forget valuable insights from previous tasks. Whereas prior studies primarily tackle a single setting of graph continual learning, such as incremental node classification, we focus on a universal approach in which each data point in a task can be a node or a graph, and the task varies from node to graph classification. We refer to this setting as Universal Graph Continual Learning (UGCL), which includes node-unit node classification (NUNC), graph-unit node classification (GUNC), and graph-unit graph classification (GUGC). Our novel method maintains a replay memory of nodes and their neighbours to remind the model of past graph structures through distillation. Emphasizing the importance of preserving distinctive graph structures across tasks, we enforce that coarse-to-fine graph representations stay close to their previous counterparts by minimizing our proposed global and local structure losses. We benchmark our method against various continual learning baselines on 8 real-world graph datasets and achieve significant improvements in average performance and forgetting across tasks.
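To make the abstract's replay-and-distillation idea concrete, below is a minimal sketch in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the `SimpleGNN` encoder, the dense adjacency handling, the `distillation_losses` helper, the replay-index layout, and the loss weights are all hypothetical choices made only to show how a local (per-node) and a global (pooled) structure loss could be combined with the task loss against a frozen snapshot of the previous model.

```python
import torch
import torch.nn.functional as F


class SimpleGNN(torch.nn.Module):
    """One round of mean-neighbour message passing followed by a linear classifier."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def embed(self, x, adj):
        # adj: dense (N, N) adjacency with self-loops; aggregate neighbour features.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = (adj @ x) / deg
        return torch.relu(self.lin1(h))  # node-level (local) embeddings

    def forward(self, x, adj):
        return self.lin2(self.embed(x, adj))  # class logits


def distillation_losses(curr_model, prev_model, x, adj, replay_idx):
    """Keep current representations close to those of the frozen previous model.

    local loss : per-node embeddings of replayed nodes (fine-grained structure)
    global loss: mean-pooled graph embedding (coarse-grained structure)
    """
    with torch.no_grad():
        h_prev = prev_model.embed(x, adj)
    h_curr = curr_model.embed(x, adj)

    local_loss = F.mse_loss(h_curr[replay_idx], h_prev[replay_idx])
    global_loss = F.mse_loss(h_curr.mean(dim=0), h_prev.mean(dim=0))
    return local_loss, global_loss


if __name__ == "__main__":
    N, D, C = 6, 8, 3
    x = torch.randn(N, D)
    adj = torch.eye(N) + torch.bernoulli(torch.full((N, N), 0.3))
    adj = ((adj + adj.T) > 0).float()  # symmetric adjacency with self-loops
    y = torch.randint(0, C, (N,))
    replay_idx = torch.tensor([0, 2])  # nodes kept in the replay memory

    model = SimpleGNN(D, 16, C)
    prev_model = SimpleGNN(D, 16, C)
    prev_model.load_state_dict(model.state_dict())  # frozen snapshot after the old task
    for p in prev_model.parameters():
        p.requires_grad_(False)

    task_loss = F.cross_entropy(model(x, adj), y)
    local_loss, global_loss = distillation_losses(model, prev_model, x, adj, replay_idx)
    loss = task_loss + 0.5 * local_loss + 0.5 * global_loss  # illustrative weights
    loss.backward()
```

In this sketch the distillation terms simply pull the current encoder's node and pooled embeddings toward those produced by the previous-task snapshot on replayed data; the paper's actual losses, memory selection, and backbones may differ.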
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- Improve the writing of the abstract and introduction
- Improve the clarity of our figures and equations
- Add a new graph continual learning baseline, ER-GNN (Zhang et al., 2022)
- Add new experiments with different graph neural network backbones
Assigned Action Editor: ~Guillaume_Rabusseau1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1515