Sufficient Subgraph Embedding Memory for Continual Graph Representation Learning

Published: 01 Feb 2023, Last Modified: 13 Feb 2023. Submitted to ICLR 2023.
Keywords: graph, class-incremental learning, continual learning, network
Abstract: Memory replay, which constructs a buffer of representative samples and retrains the model on the buffer to maintain performance on earlier tasks, has shown great success in continual learning with Euclidean data. Directly applying it to graph data, however, can cause a memory explosion, because the explicit topological connections of each representative node must also be stored. To this end, we present Parameter Decoupled Graph Neural Networks (PDGNNs) with Sufficient Subgraph Embedding Memory (SSEM), which fully utilize the explicit topological information for memory replay while reducing the memory space complexity from $\mathcal{O}(nd^L)$ to $\mathcal{O}(n)$, where $n$ is the memory buffer size, $d$ is the average node degree, and $L$ is the range of neighborhood aggregation. Specifically, PDGNNs decouple the trainable parameters from the computation subgraphs via $\textit{Sufficient Subgraph Embeddings}$ (SSEs), which compress each subgraph into a single vector ($\textit{i.e.}$, an SSE) to reduce memory consumption. In addition, we discover a $\textit{pseudo-training effect}$ in memory-based continual graph learning that has no counterpart in continual learning on Euclidean data without topological connections ($\textit{e.g.}$, individual images). Based on this discovery, we develop a novel $\textit{coverage maximization sampling}$ strategy that enhances performance when the memory budget is tight. Thorough empirical studies demonstrate that PDGNNs with SSEM outperform state-of-the-art techniques in both class-incremental and task-incremental settings.
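To make the memory argument concrete, here is a minimal sketch of the SSE idea in Python. It assumes an SGC-style model in which neighborhood aggregation is parameter-free, so each node's $L$-hop computation subgraph collapses into one fixed-size vector and the replay buffer stores $\mathcal{O}(n)$ vectors instead of $\mathcal{O}(nd^L)$ subgraph nodes. All names and the dense-matrix propagation are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: SSEs via parameter-free L-hop propagation (SGC-style
# assumption), plus a replay buffer that stores only (vector, label) pairs.
import numpy as np

def sufficient_subgraph_embeddings(adj: np.ndarray, feats: np.ndarray, L: int) -> np.ndarray:
    """Parameter-free propagation: S = (D^-1/2 (A+I) D^-1/2)^L X."""
    a_hat = adj + np.eye(adj.shape[0])                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    p = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # symmetric normalization
    sse = feats
    for _ in range(L):
        sse = p @ sse                                       # one hop of aggregation
    return sse                                              # row i = SSE of node i

class SSEMemory:
    """Replay buffer holding O(n) SSE vectors, never the subgraphs themselves."""
    def __init__(self, budget: int):
        self.budget = budget
        self.sses, self.labels = [], []

    def add(self, sse_rows: np.ndarray, labels: np.ndarray) -> None:
        for s, y in zip(sse_rows, labels):
            if len(self.sses) < self.budget:
                self.sses.append(s)
                self.labels.append(y)

    def replay_batch(self, size: int):
        idx = np.random.choice(len(self.sses), size=min(size, len(self.sses)), replace=False)
        return np.stack([self.sses[i] for i in idx]), np.array([self.labels[i] for i in idx])
```

Because the trainable part of the model (e.g., an MLP classifier) consumes SSE vectors directly, replaying a buffered node never requires reconstructing its original computation subgraph.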
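The abstract does not spell out how coverage maximization sampling works, so the following is a hedged greedy set-cover sketch under the assumption that "coverage" means the union of $L$-hop neighborhoods touched by the buffered nodes; the paper's exact criterion may differ.

```python
# Hedged sketch of coverage maximization sampling: greedily pick nodes whose
# L-hop neighborhoods add the most yet-uncovered nodes. The greedy set-cover
# heuristic here is an assumption, not the paper's stated algorithm.
import numpy as np

def l_hop_neighborhoods(adj: np.ndarray, L: int) -> list:
    """Boolean reachability within L hops (including the node itself)."""
    reach = (adj + np.eye(adj.shape[0])) > 0
    cur = reach.copy()
    for _ in range(L - 1):
        cur = (cur.astype(int) @ reach.astype(int)) > 0     # expand one hop
    return [set(np.flatnonzero(row)) for row in cur]

def coverage_maximization_sample(adj: np.ndarray, budget: int, L: int) -> list:
    hoods = l_hop_neighborhoods(adj, L)
    covered, chosen = set(), []
    for _ in range(budget):
        gains = [len(h - covered) for h in hoods]           # marginal coverage gain
        best = int(np.argmax(gains))
        if gains[best] == 0:                                # whole graph already covered
            break
        chosen.append(best)
        covered |= hoods[best]
    return chosen
```

One plausible reading of the pseudo-training effect is that training on one SSE implicitly benefits every node whose computation subgraph overlaps it, in which case a buffer whose neighborhoods cover more of the graph extracts more value from a tight budget.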
Area: Deep Learning and representational learning