Fast Yet Effective Graph Unlearning through Influence Analysis

Published: 01 Feb 2023, Last Modified: 13 Feb 2023. Submitted to ICLR 2023.
Abstract: Evolving data privacy policies and regulations have led to increasing interest in the machine unlearning problem. In this paper, we consider Graph Neural Networks (GNNs) as the target model and study the problem of edge unlearning in GNNs, i.e., learning a new GNN model as if a specified set of edges had never existed in the original training graph. Despite its practical importance, the problem remains elusive due to the non-convex nature of GNNs. Our main technical contribution is three-fold: 1) we cast the problem of edge unlearning as estimating the influence functions of the edges to be removed; 2) we design a computationally and memory efficient algorithm named EraEdge for edge influence estimation and unlearning; 3) under standard regularity conditions, we prove that the sequence of iterates produced by our algorithm converges to the desired model. A comprehensive set of experiments on three prominent GNN models and four benchmark graph datasets demonstrates that our algorithm achieves significant speed-ups over retraining from scratch with only a modest loss in model accuracy. Furthermore, our algorithm outperforms the existing GNN unlearning approach in terms of both training time and accuracy of the target GNN model.
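The abstract's core idea, casting unlearning as an influence-function estimate followed by a single corrective update, can be sketched on a toy convex problem. The snippet below uses ridge regression (not a GNN, and not the paper's EraEdge algorithm; all names are illustrative) to show the generic one-step influence update: move the trained parameters by the inverse Hessian times the gradient of the removed sample's loss, and compare against retraining from scratch.

```python
import numpy as np

# Toy setup: ridge regression stands in for the target model; the
# influence-function idea is the same. Names here are illustrative
# assumptions, not details from the paper.
rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
lam = 1.0  # L2 regularization strength

def fit(X, y):
    """Exact minimizer of 0.5*||Xw - y||^2 + 0.5*lam*||w||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_full = fit(X, y)             # model trained on all data
w_retrain = fit(X[1:], y[1:])  # ground truth: retrain without sample 0

# One-step influence-function "unlearning" of sample 0:
#   w_unlearned ~ w_full + H^{-1} * grad of the removed sample's loss,
# where H is the Hessian of the full training objective at w_full.
H = X.T @ X + lam * np.eye(d)
x0, y0 = X[0], y[0]
g0 = x0 * (x0 @ w_full - y0)   # gradient of 0.5*(x0.w - y0)^2 at w_full
w_unlearned = w_full + np.linalg.solve(H, g0)

# The influence step lands much closer to the retrained model than
# simply keeping the original parameters does.
err_naive = np.linalg.norm(w_full - w_retrain)
err_influence = np.linalg.norm(w_unlearned - w_retrain)
```

For edge unlearning in a GNN the gradient term would instead capture the change in loss caused by deleting the edges, and the Hessian-inverse-vector product would be approximated iteratively rather than solved exactly, which is where the paper's efficiency claims come in.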
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (e.g., AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)