Hierarchical Prototype Network for Continual Graph Representation Learning

21 May 2021 (modified: 05 May 2023) · NeurIPS 2021 Submission · Readers: Everyone
Abstract: Despite significant advances in graph representation learning, little attention has been paid to graph data in which new categories of nodes (e.g., new research areas in citation networks or new types of products in co-purchasing networks) and their associated edges continuously emerge. The key challenge is to incorporate the feature and topological information of new nodes in a continuous and effective manner such that performance on existing nodes is uninterrupted. To this end, we present Hierarchical Prototype Networks (HPNs), which adaptively extract different levels of abstract knowledge, in the form of prototypes, to represent continually expanding graphs. Specifically, we first leverage a set of Atomic Feature Extractors (AFEs) to generate basic features that encode both the elemental attribute information and the topological structure of the target node. Next, we develop HPNs by adaptively selecting relevant AFEs and representing each node with three levels of prototypes: atomic-level, node-level, and class-level. In this way, whenever a new category of nodes arrives, only the relevant AFEs and prototypes at each level are activated and refined, while the others remain uninterrupted. Finally, we provide a theoretical analysis of the memory consumption bound and the continual learning capability of HPNs. Extensive empirical studies on eight public datasets show that HPNs are memory efficient and achieve state-of-the-art performance on different continual graph representation learning tasks.
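The select-and-refine mechanism the abstract describes can be illustrated with a minimal single-level prototype store. This is a hedged sketch, not the authors' implementation: the `PrototypeLayer` class, the cosine-similarity matching, the similarity threshold, and the moving-average refinement rule are all illustrative assumptions standing in for the paper's actual prototype establishment and update procedure.

```python
import numpy as np

class PrototypeLayer:
    """Illustrative single-level prototype store (an assumption, not the paper's code).

    Each incoming feature is matched to its closest prototype by cosine
    similarity. A sufficiently similar prototype is selected and refined;
    otherwise a new prototype is established. Unselected prototypes are
    left untouched, mirroring the abstract's claim that only relevant
    prototypes are activated and refined while others stay uninterrupted.
    """

    def __init__(self, threshold=0.8, lr=0.1):
        self.threshold = threshold  # similarity needed to reuse a prototype (assumed value)
        self.lr = lr                # moving-average refinement rate (assumed value)
        self.prototypes = []        # list of unit-norm prototype vectors

    @staticmethod
    def _unit(v):
        v = np.asarray(v, dtype=float)
        return v / (np.linalg.norm(v) + 1e-12)

    def match(self, feature):
        """Return (index, similarity) of the closest prototype, or (None, -1.0)."""
        f = self._unit(feature)
        best, best_sim = None, -1.0
        for i, p in enumerate(self.prototypes):
            sim = float(f @ p)
            if sim > best_sim:
                best, best_sim = i, sim
        return best, best_sim

    def observe(self, feature):
        """Select-and-refine step: reuse a close prototype or create a new one."""
        f = self._unit(feature)
        idx, sim = self.match(f)
        if idx is not None and sim >= self.threshold:
            # Refine only the selected prototype; all others remain frozen.
            self.prototypes[idx] = self._unit(
                (1 - self.lr) * self.prototypes[idx] + self.lr * f
            )
            return idx
        self.prototypes.append(f)
        return len(self.prototypes) - 1
```

Under this sketch, two dissimilar features establish two separate prototypes, while a near-duplicate of an existing feature reuses (and slightly refines) the matching prototype instead of growing the store, which is the intuition behind the memory-efficiency claim.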
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: zip