Can LLMs Alleviate Catastrophic Forgetting in Graph Continual Learning? A Systematic Study

03 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: graph continual learning; graph neural networks; large language models
TL;DR: Benchmarking LLMs and graph foundation models in continual learning on graph-structured data.
Abstract: Real-world data, including graph-structured data, often arrives in a streaming manner, so learning systems must continuously acquire new knowledge without forgetting previously learned information. Although substantial existing work attempts to address catastrophic forgetting in graph machine learning, it is all based on training from scratch on streaming data. With the rise of pretrained models, an increasing number of studies have leveraged their strong generalization ability for continual learning. In this work, we therefore ask whether large language models (LLMs) can mitigate catastrophic forgetting in graph continual learning. We first evaluate the performance of LLMs and graph foundation models in graph continual learning scenarios and find that, with minimal modifications, they can easily achieve state-of-the-art results. Moreover, we find that certain current graph continual learning task settings have significant flaws: zero forgetting can be achieved with simple manipulations. Finally, based on extensive experiments, we propose a simple yet effective method, Simple Graph Continual Learning (SimGCL), which surpasses the previous state-of-the-art baselines by around 20% under the rehearsal-free constraint.
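For context, graph continual learning benchmarks of this kind are typically scored with average accuracy and average forgetting over a sequence of tasks. The sketch below illustrates these standard metrics on a hypothetical accuracy matrix; it is a generic illustration of the evaluation protocol, not the paper's SimGCL method or its actual code, and the numbers are made up.

```python
import numpy as np

# Hypothetical accuracy matrix: acc[t, i] = accuracy on task i after training
# on task t. In a class-incremental graph benchmark, each task would hold a
# disjoint set of node classes arriving in a stream.
acc = np.array([
    [0.92, 0.00, 0.00],
    [0.61, 0.90, 0.00],
    [0.48, 0.66, 0.88],
])

T = acc.shape[0]

# Average accuracy (AA): mean performance over all tasks after the final task.
average_accuracy = acc[-1, :].mean()

# Average forgetting (AF): for each earlier task, the gap between its best
# accuracy at any point in the stream and its final accuracy, averaged over
# tasks 0..T-2. Zero forgetting means no task degrades after it is learned.
forgetting = [acc[:-1, i].max() - acc[-1, i] for i in range(T - 1)]
average_forgetting = float(np.mean(forgetting))

print(f"AA = {average_accuracy:.3f}, AF = {average_forgetting:.3f}")
```

A rehearsal-free method, as claimed for SimGCL, must keep AF low without storing or replaying examples from earlier tasks.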
Primary Area: datasets and benchmarks
Submission Number: 1449