TCGU: Data-Centric Graph Unlearning Based on Transferable Condensation

Fan Li, Xiaoyang Wang, Dawei Cheng, Wenjie Zhang, Chen Chen, Ying Zhang, Xuemin Lin

Published: 01 Feb 2026, Last Modified: 21 Jan 2026 · IEEE Transactions on Knowledge and Data Engineering · CC BY-SA 4.0
Abstract: With growing demands for data privacy and model robustness, graph unlearning (GU), which erases the influence of specific data from trained GNN models, has gained significant attention. However, existing exact unlearning methods suffer from either low efficiency or poor model performance. Current approximate methods, while more utility-preserving and efficient, require access to the forget set during unlearning, which makes them inapplicable in immediate-deletion scenarios and thereby undermines privacy. Additionally, these approximate methods, which directly perturb model parameters, still raise significant concerns about their unlearning efficacy in empirical studies. To fill this gap, we propose Transferable Condensation Graph Unlearning (TCGU), a data-centric solution to graph unlearning. Specifically, we first develop a two-level alignment strategy to pre-condense the original graph into a compact yet utility-preserving dataset for subsequent unlearning tasks. Upon receiving an unlearning request, we fine-tune the pre-condensed data with a low-rank plugin to directly align its distribution with the remaining graph, thus efficiently revoking the information of the deleted data without accessing it. A novel similarity distribution matching approach and a discrimination regularizer are proposed to effectively transfer the condensed data and preserve its utility in GNN training, respectively. Finally, we retrain the GNN on the transferred condensed data. Extensive experiments on 7 benchmark datasets demonstrate that TCGU achieves superior performance in terms of model utility, unlearning efficiency, and unlearning efficacy compared to existing GU methods. To the best of our knowledge, this is the first study to explore graph unlearning with immediate data removal using a data-centric approximate method.
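To make the data-centric pipeline concrete, the following is a minimal, hypothetical sketch of the low-rank fine-tuning step described in the abstract. It is not the paper's implementation: the variable names (X_syn, X_rem, A, B), the use of class-wise mean pairwise cosine similarity as the "similarity distribution matching" objective, and the centroid-separation form of the discrimination regularizer are all assumptions made for illustration. The sketch only shows the shape of the idea: keep the pre-condensed data frozen, train a low-rank plugin to align its statistics with the remaining graph, then retrain a GNN on the adjusted condensed data.

```python
# Hypothetical sketch of TCGU's low-rank fine-tuning step (assumed forms, not the paper's exact losses).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_syn, n_rem, n_cls, rank = 64, 50, 500, 5, 4

X_syn = torch.randn(n_syn, d)                      # frozen pre-condensed node features
Y_syn = torch.randint(0, n_cls, (n_syn,))          # their labels
X_rem = torch.randn(n_rem, d)                      # features of the remaining graph after deletion
Y_rem = torch.randint(0, n_cls, (n_rem,))

# Low-rank plugin: only A and B are trained, so the unlearning-time update is cheap.
A = torch.zeros(n_syn, rank, requires_grad=True)
B = torch.zeros(rank, d, requires_grad=True)
torch.nn.init.normal_(A, std=0.01)

opt = torch.optim.Adam([A, B], lr=1e-2)

def class_similarity_stats(X, Y):
    """Mean pairwise cosine similarity within each class (an assumed proxy statistic)."""
    stats = []
    for c in range(n_cls):
        Xc = F.normalize(X[Y == c], dim=1)
        stats.append((Xc @ Xc.T).mean())
    return torch.stack(stats)

for step in range(200):
    X_tuned = X_syn + A @ B                        # condensed data after the low-rank plugin
    # Distribution transfer: align condensed statistics with the remaining graph.
    match_loss = F.mse_loss(class_similarity_stats(X_tuned, Y_syn),
                            class_similarity_stats(X_rem, Y_rem))
    # Discrimination regularizer (assumed form): keep class centroids separated
    # so the transferred data stays useful for GNN training.
    centroids = torch.stack([X_tuned[Y_syn == c].mean(0) for c in range(n_cls)])
    sim = F.normalize(centroids, dim=1) @ F.normalize(centroids, dim=1).T
    disc_reg = (sim - torch.eye(n_cls)).pow(2).mean()

    loss = match_loss + 0.1 * disc_reg
    opt.zero_grad(); loss.backward(); opt.step()

# A GNN would then be retrained from scratch on the transferred condensed data (X_tuned, Y_syn),
# without ever touching the forget set.
```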