Abstract: This paper addresses the challenge of graph domain adaptation on evolving, multiple out-of-distribution (OOD) graphs.
Conventional graph domain adaptation methods are confined to single-step adaptation, making them ineffective at handling continuous domain shifts and prone to catastrophic forgetting. This paper introduces the Graph Continual Adaptive Learning (GCAL) method, designed to enhance model sustainability and adaptability across a sequence of graph domains. GCAL employs a bilevel optimization strategy that alternates between two phases. The "adapt" phase uses an information maximization approach to fine-tune the model on new graph domains while re-adapting past memories to mitigate forgetting. The "generate memory" phase, guided by a theoretical lower bound derived from information bottleneck theory, uses a variational memory graph generation module to condense each original graph into a compact memory. Extensive experimental evaluations demonstrate that GCAL substantially outperforms existing methods in both adaptability and knowledge retention.
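The alternating adapt / generate-memory loop described above can be sketched in simplified pseudocode-style Python. Everything here is illustrative: the function names (`adapt_step`, `generate_memory`, `gcal_loop`), the list-based "graphs," and the truncation-based memory condensation are placeholder assumptions, not the authors' actual information-maximization or variational components.

```python
# Hypothetical sketch of GCAL's alternating bilevel loop.
# Graphs are stand-in lists of node ids; the real method operates on
# graph neural networks with information-theoretic objectives.

def adapt_step(model, graph, memories):
    # "Adapt" phase: fine-tune on the new domain while re-adapting
    # past memories to mitigate forgetting. Here we only record which
    # data the model has been exposed to.
    model["seen"].append(graph)
    model["seen"].extend(memories)
    return model

def generate_memory(graph, budget=2):
    # "Generate memory" phase: condense the original graph into a
    # compact memory. This placeholder just keeps the first `budget`
    # nodes; the paper uses a variational memory graph generator.
    return graph[:budget]

def gcal_loop(domains, budget=2):
    model = {"seen": []}
    memories = []
    for graph in domains:
        model = adapt_step(model, graph, memories)
        memories.append(generate_memory(graph, budget))
    return model, memories

model, memories = gcal_loop([[1, 2, 3, 4], [5, 6, 7], [8, 9]])
print(len(memories))  # → 3, one condensed memory per domain
```

The key structural point the sketch conveys is that each incoming domain is adapted to alongside all previously stored memories, and each domain then contributes a small memory for future replay.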
Lay Summary: Many real-world networks—like social networks or recommendation systems—keep changing over time, so models trained on past data often struggle to work well on new, different graphs and tend to forget what they learned before. We introduce GCAL, a two-phase approach that alternates between "adapt," where the model fine-tunes itself on incoming graph domains while revisiting past memories to avoid forgetting, and "generate memory," where a compact memory of the new graph data is created. Think of it like teaching someone new topics while making sure they remember what they already knew. This makes GCAL a promising step toward long-lasting, flexible graph learning in dynamic environments.
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Continual Learning, Domain adaptation, Graph neural network
Submission Number: 11285