Keywords: graph class-incremental learning, parameter-efficient adaptation
Abstract: Graph Class-Incremental Learning (GCIL) seeks to learn novel classes sequentially while preserving knowledge acquired from previously seen classes. However, to tackle the pervasive challenge of catastrophic forgetting, recent GCIL methods often train separate classifiers from scratch for each task, which is redundant in design and computationally expensive. Moreover, isolating streaming data into separate tasks hampers knowledge transfer across tasks. To address these issues, we propose Graph2Hyper, a parameter-efficient framework that uses a hypernetwork to generate task-specific classifiers on the fly, based solely on the input graph of the current task. Concretely, the hypernetwork consists of just two linear layers: a frozen, task-shared layer that preserves cross-task knowledge, and a trainable, task-specific layer that captures the unique characteristics of each task. To distinguish between tasks during incremental learning, task-prototypes are extracted via pooling over global node representations, capturing task-specific contextual knowledge. To further model the association between tasks and their corresponding classes, we construct class-prototypes with a dynamic task-level bias through a learnable mapping function. By encoding class-level discrimination while retaining task-level context, the hypernetwork enables continual, forget-free adaptation to new classes without the need for prototype rehearsal. Extensive experiments on four benchmark datasets demonstrate that Graph2Hyper achieves promising performance with superior parameter efficiency.
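The abstract's core mechanism — a two-layer hypernetwork that turns a pooled task-prototype into classifier weights — can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation; all names, dimensions, and the choice of mean pooling and ReLU are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

hid, feat, n_cls = 64, 64, 5
# Task-shared layer: frozen weights, preserved across tasks (assumed shapes).
W_shared = rng.standard_normal((hid, hid)) * 0.1
# Task-specific layer: trainable per task, emits flattened classifier weights.
W_spec = rng.standard_normal((feat * n_cls, hid)) * 0.1

def generate_classifier(node_reps):
    # Task-prototype: mean-pool global node representations of the current task.
    task_proto = node_reps.mean(axis=0)
    # Frozen shared layer (ReLU assumed) followed by the task-specific layer.
    h = np.maximum(W_shared @ task_proto, 0.0)
    return (W_spec @ h).reshape(n_cls, feat)   # generated classifier weights

node_reps = rng.standard_normal((100, hid))    # 100 nodes in the current task
W_cls = generate_classifier(node_reps)         # shape (n_cls, feat)
logits = node_reps @ W_cls.T                   # per-node class scores
```

Because the classifier is generated from the task's own graph rather than stored per task, only the small task-specific layer would need training for each new task, which matches the parameter-efficiency claim.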
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 17387