ReFNet: Rehearsal-based graph lifelong learning with multi-resolution framelet graph neural networks

Published: 01 Jan 2025, Last Modified: 25 Jan 2025 · Inf. Sci. 2025 · CC BY-SA 4.0
Abstract: Graph lifelong learning (GLL), also known as graph continual or incremental learning, focuses on adapting to new tasks presented by emerging graph data while preserving the model's performance on existing tasks. A pivotal element in GLL is the graph neural network (GNN) module, which plays an essential role in the framework's overall effectiveness. Nevertheless, the majority of GNNs are designed with a single-resolution approach to feature extraction, which restricts their capacity to simultaneously detect fine-grained local details (high resolution) and broader structural patterns such as clusters and communities (low resolution). Given the diverse graph instances and distributions encountered in GLL, fixed handcrafted transforms fall short in generating effective multi-resolution representations tailored to each graph instance. In this paper, we propose ReFNet, a rehearsal-based GLL framework that leverages a stochastic configuration scheme and multi-resolution framelet-based GNNs. A key innovation in ReFNet is the introduction of a framelet-based graph learning module that employs both low-pass and high-pass filters to efficiently extract low-resolution and high-resolution representations. To ensure the smooth operation of this module within the GLL context, we implement a coverage-based diversity approach, which considers both the representativeness of classes and the diversity within each class of replayed nodes. Additionally, a graph structure learning strategy is integrated to ensure that replayed nodes are connected to truly informative neighbors. The experimental results on multiple benchmark datasets, compared against various baselines, validate the effectiveness of ReFNet, while also underscoring the importance of multi-resolution representations within the graph learning module.
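To make the multi-resolution idea concrete, the following is a minimal illustrative sketch of splitting a graph signal into low-pass (coarse, community-level) and high-pass (fine-grained, local) components with complementary spectral filters on the normalized Laplacian. The filter shapes (`cos`/`sin`), function names, and toy graph below are our own assumptions for illustration; they are not ReFNet's actual framelet transforms, which the paper describes as learned rather than handcrafted.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def framelet_split(A, X):
    """Return (low-pass, high-pass) views of node features X.

    Illustrative choice: g_low(lam) = cos(pi*lam/4), g_high(lam) = sin(pi*lam/4),
    so g_low^2 + g_high^2 = 1 and the pair forms a tight frame
    (the two bands reconstruct X exactly).
    """
    L = normalized_laplacian(A)
    lam, U = np.linalg.eigh(L)            # graph spectrum, lam in [0, 2]
    g_low = np.cos(np.pi * lam / 4.0)     # smooth low-pass response
    g_high = np.sin(np.pi * lam / 4.0)    # complementary high-pass response
    low = U @ np.diag(g_low) @ U.T @ X    # broad structural patterns
    high = U @ np.diag(g_high) @ U.T @ X  # fine-grained local detail
    return low, high

# Toy example: a 4-node path graph with 2-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))
low, high = framelet_split(A, X)
```

A downstream GNN layer could then process `low` and `high` separately and merge them, which is one simple way to expose both resolutions to the model.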