FTF-ER: Feature-Topology Fusion-Based Experience Replay Method for Continual Graph Learning

Published: 20 Jul 2024 · Last Modified: 21 Jul 2024 · MM 2024 Poster · CC BY 4.0
Abstract: Continual graph learning (CGL) is an important and challenging task that aims to extend static GNNs to dynamic task-flow scenarios. As one of the mainstream families of CGL methods, experience replay (ER) has received widespread attention due to its superior performance. However, existing ER methods select samples by either feature significance or topological relevance alone, which limits how fully they exploit the graph data. In addition, topology-based ER methods consider only local topological information and add neighboring nodes to the buffer, which ignores global topological information and increases memory overhead. To bridge these gaps, we propose Feature-Topology Fusion-based Experience Replay (FTF-ER), a novel method that effectively mitigates catastrophic forgetting with enhanced efficiency. Specifically, to maximize utilization of the entire graph, we propose a highly complementary approach that fuses feature information with global topological information, significantly improving the effectiveness of the sampled nodes. Moreover, to further exploit global topological information, we propose the Hodge Potential Score (HPS), a novel module that computes the topological importance of nodes. HPS derives a global node ranking via Hodge decomposition on graphs, providing more accurate global topological information than neighbor sampling. By dispensing with neighbor sampling, HPS significantly reduces the buffer storage cost of acquiring topological information and simultaneously decreases training time. Compared with state-of-the-art methods, FTF-ER improves average accuracy (AA) by 3.6% and average forgetting (AF) by 7.1% on the OGB-Arxiv dataset, demonstrating superior performance in the class-incremental learning setting.
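The abstract describes two steps without giving formulas: ranking nodes globally via a graph Hodge decomposition, and fusing that topological score with a feature score to pick replay-buffer nodes. The sketch below is only an illustration of those ideas, not the authors' HPS definition: it solves the HodgeRank-style least-squares problem (the gradient component of the Hodge decomposition, recovering node potentials from edge flows), then blends the potentials with hypothetical feature scores. The edge flows, feature scores, and the 50/50 blending weight are invented stand-ins.

```python
import numpy as np

def hodge_potentials(edges, flows, n_nodes):
    """Least-squares node potentials from edge flows (HodgeRank-style).

    Solves  min_s  sum_{(i,j)} (s[j] - s[i] - flow_ij)^2,
    i.e. extracts the gradient (curl-free) component of the
    Hodge decomposition of the edge flow.
    """
    m = len(edges)
    B = np.zeros((m, n_nodes))              # incidence (gradient) operator
    for k, (i, j) in enumerate(edges):
        B[k, i], B[k, j] = -1.0, 1.0
    s, *_ = np.linalg.lstsq(B, np.asarray(flows, float), rcond=None)
    return s - s.mean()                     # fix the constant-shift gauge

# toy graph: a triangle (0,1,2) plus a pendant node 3
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
flows = [1.0, 1.0, 2.0, 0.5]               # consistent flows -> exact potentials
s = hodge_potentials(edges, flows, n_nodes=4)

# hypothetical fusion: blend normalized feature scores with the
# topological potentials to rank nodes for the replay buffer
feat = np.array([0.2, 0.9, 0.4, 0.1])      # invented feature-importance scores
topo = (s - s.min()) / (s.max() - s.min()) # rescale potentials to [0, 1]
hybrid = 0.5 * feat + 0.5 * topo           # blending weight is a placeholder
buffer_ids = np.argsort(hybrid)[-2:]       # keep the top-2 scoring nodes
```

Because the toy flows here are exactly consistent (each flow equals a potential difference), the least-squares residual is zero and the recovered potentials reproduce every edge flow; on real graphs the residual captures the cyclic (curl/harmonic) part that the ranking discards.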
Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: Our research primarily focuses on Multimodal Processing (MP). Our contributions to MP span three aspects: research domain, methodology, and datasets. Our research domain, Continual Graph Learning (CGL), focuses on efficiently processing and updating graph data over time to adapt to new information and dynamic environments. CGL contributes to MP by providing a dynamically updated structured-information framework, which facilitates the adaptation to and understanding of complex, multi-source data environments in graph learning. In addition, our work introduces FTF-ER, a method that enhances CGL by integrating feature and topological information, analogous to multimodal fusion in MP. This method advances multimodal fusion by combining diverse data types from complex, evolving datasets for richer analysis and interaction. FTF-ER blends heterogeneous information to improve learning outcomes and memory efficiency, offering a new perspective and technical pathway for the multimodal processing domain. Furthermore, the graph datasets we use cover various domains, including social networks, online shopping, and academic networks. These areas are rich in multimodal data, implying that our approach can be applied across many multimodal scenarios to improve system performance.
Supplementary Material: zip
Submission Number: 4071