Stochastic Experience-Replay for Graph Continual Learning

Published: 16 Nov 2024 · Last Modified: 26 Nov 2024 · LoG 2024 Poster · CC BY 4.0
Keywords: Graph Continual Learning, Experience Replay
TL;DR: We propose Stochastic Experience Replay for GCL that parameterizes a kernel function to estimate the distribution density of condensed graphs for historical tasks.
Abstract: Experience Replay (ER) methods in graph continual learning (GCL) mitigate catastrophic forgetting by storing and replaying historical tasks. However, these methods often struggle to store tasks efficiently in a compact memory buffer, which limits scalability. While recently proposed graph condensation techniques address this by summarizing historical graphs, they often fail to capture variations within the distribution of historical tasks. In this paper, we propose a novel framework, called *Stochastic Experience Replay for GCL (SERGCL)*, which incorporates a *stochastic memory buffer (SMB)* that parameterizes a kernel function to estimate the distribution density of condensed graphs for each historical task. This allows efficient sampling of condensed graphs, leading to better coverage of historical tasks in the memory buffer and improved experience replay. Experimental results on four benchmark datasets show that SERGCL achieves up to an 8.5% improvement in *average performance* over current state-of-the-art GCL models. Our code is available at: https://github.com/jayjaynandy/sergcl
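The abstract describes the SMB only at a high level. As a rough illustration of the general idea, not the authors' implementation, the following is a minimal sketch assuming each condensed graph is summarized by a node-feature matrix and using a Gaussian KDE as a stand-in for the parameterized kernel; the class `StochasticMemoryBuffer` and its methods are hypothetical names.

```python
# Hypothetical sketch of a stochastic memory buffer: fit a density estimate
# over condensed-graph features per task, then sample from it for replay.
# Names and structure are illustrative, not taken from the SERGCL codebase.
import numpy as np
from scipy.stats import gaussian_kde


class StochasticMemoryBuffer:
    """Per-task kernel density estimate over condensed-graph feature vectors."""

    def __init__(self):
        self.kdes = {}  # task_id -> fitted Gaussian KDE

    def store_task(self, task_id, condensed_feats):
        # condensed_feats: (d, n) array, one column per condensed node;
        # gaussian_kde expects dimensions along rows, samples along columns.
        self.kdes[task_id] = gaussian_kde(condensed_feats)

    def sample(self, task_id, n_samples):
        # Draw stochastic condensed features for replay; returns (d, n_samples).
        return self.kdes[task_id].resample(n_samples)


# Usage: fit on the condensed features of a past task, then sample for replay.
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 40))  # 40 condensed nodes with 16-dim features
buffer = StochasticMemoryBuffer()
buffer.store_task(task_id=0, condensed_feats=feats)
replay_feats = buffer.sample(task_id=0, n_samples=10)  # shape (16, 10)
```

Sampling from a fitted density, rather than replaying a single fixed condensed graph, is what gives the buffer broader coverage of each historical task's distribution.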
Submission Type: Full paper proceedings track submission (max 9 main pages).
Software: https://github.com/jayjaynandy/sergcl
Submission Number: 44