Dynamic Graph Unlearning: A General and Efficient Post-Processing Method via Gradient Transformation
Track: Security and privacy
Keywords: Dynamic Graphs, Unlearning, Privacy, GNN, Trustworthiness
Abstract: Dynamic graph neural networks (DGNNs) have emerged and been widely deployed in various web applications (e.g., Reddit) to serve users (e.g., personalized content delivery) due to their remarkable ability to learn from complex and dynamic user interaction data. Despite benefiting from high-quality services, users have raised privacy concerns, such as the misuse of personal data (e.g., dynamic user-user/item interactions) for model training, requiring DGNNs to ''forget'' their data to comply with AI governance laws (e.g., the ''right to be forgotten'' in GDPR). However, existing static graph unlearning studies cannot $\textit{unlearn dynamic graph elements}$ and exhibit limitations such as model-specific designs or reliance on pre-processing, which hinder their applicability to dynamic graph unlearning. To this end, we study dynamic graph unlearning for the first time and propose an $\textit{effective}$, $\textit{efficient}$, $\textit{general}$, and $\textit{post-processing}$ method to implement DGNN unlearning. Specifically, we first formulate dynamic graph unlearning in the context of continuous-time dynamic graphs, and then propose a method called Gradient Transformation that directly maps an unlearning request to the desired parameter update. Comprehensive evaluations on six real-world datasets and state-of-the-art DGNN backbones demonstrate its effectiveness (e.g., a limited utility drop or even an obvious improvement) and efficiency (e.g., a 7.23$\times$ speed-up). Additionally, our method has the potential to handle future unlearning requests with significant performance gains (e.g., a 32.59$\times$ speed-up).
Submission Number: 737
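The core idea named in the abstract (mapping an unlearning request to a parameter update in a single post-processing step) can be illustrated with a minimal sketch. All names here (`W`, `unlearn_step`, the toy dimensions) are illustrative assumptions, not the authors' implementation; a learned linear map stands in for the paper's Gradient Transformation module.

```python
import numpy as np

# Hypothetical sketch: a learned map W takes the gradient induced by an
# unlearning request (the gradient of a loss on the data to forget) and
# produces a direct parameter update, so unlearning becomes one
# post-processing step instead of retraining the DGNN from scratch.
rng = np.random.default_rng(0)

dim = 6                                # number of model parameters (toy scale)
params = rng.normal(size=dim)          # stand-in for trained DGNN parameters
W = 0.1 * rng.normal(size=(dim, dim))  # gradient-transformation weights (assumed learned)

def unlearn_step(params: np.ndarray, grad: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Map the unlearning-request gradient to an updated parameter vector."""
    return params + W @ grad

grad = rng.normal(size=dim)            # gradient w.r.t. the data to be forgotten
new_params = unlearn_step(params, grad, W)
print(new_params.shape)                # update preserves the parameter dimensionality
```

Any future unlearning request would reuse the same learned map, which is consistent with the abstract's claim of large speed-ups on subsequent requests.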