Towards Robust Graph Unlearning via Gradient Consistency Control

14 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: GRAPH UNLEARNING
Abstract: Recent graph unlearning models, which aim to efficiently remove undesired data by optimizing a unified objective of forget and retain losses, exhibit a critical vulnerability: their efficacy is severely compromised by inference-time noise attacks on node features. We are the first to diagnose that this fragility stems from a fundamental \textbf{gradient inconsistency} problem. Specifically, we theoretically and empirically demonstrate that, within the unified optimization objective of graph unlearning, conventional robustness techniques such as adversarial smoothing are counterproductive: they exacerbate the \textbf{directional conflict} between the forget and retain gradients, leading to negative interference and failed optimization. To address this, we propose RUNNER, a novel framework for \textbf{R}obust graph \textbf{UN}learning via gradie\textbf{N}t consist\textbf{E}ncy cont\textbf{R}ol. RUNNER resolves this conflict through a principled decoupling strategy comprising two core innovations: (1) a decoupled regularization scheme that independently stabilizes the gradients of the forget and retain losses against perturbations, and (2) a gradient alignment objective that penalizes gradient inconsistency between the two losses. Extensive experiments on four real-world datasets demonstrate that RUNNER significantly enhances robustness against noise attacks while maintaining performance under noise-free conditions. Code is available at \href{https://anonymous.4open.science/r/RUNNER-2FD7}{https://anonymous.4open.science/r/RUNNER-2FD7}.
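The abstract does not specify the exact form of the gradient alignment objective. Below is a minimal PyTorch sketch, assuming the penalty is a clamped negative cosine similarity between the flattened forget and retain gradients; the function name and this particular formulation are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def gradient_alignment_penalty(model, forget_loss, retain_loss):
    """Hypothetical sketch of a gradient-alignment term, not the paper's exact objective.

    Returns a scalar penalty that is zero when the forget and retain
    gradients point in agreeing directions and grows as they conflict.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    # create_graph=True keeps the gradients differentiable, so the
    # penalty itself can be optimized (a second-order term).
    g_forget = torch.autograd.grad(forget_loss, params, create_graph=True)
    g_retain = torch.autograd.grad(retain_loss, params, create_graph=True)
    g_f = torch.cat([g.flatten() for g in g_forget])
    g_r = torch.cat([g.flatten() for g in g_retain])
    cos = F.cosine_similarity(g_f, g_r, dim=0)
    # Penalize only directional conflict (negative cosine similarity).
    return torch.clamp(-cos, min=0.0)

# Illustrative usage: add the penalty to the unified unlearning objective.
# total_loss = forget_loss + retain_loss \
#     + lam * gradient_alignment_penalty(model, forget_loss, retain_loss)
```

Under these assumptions, the term leaves well-aligned gradient pairs untouched and only activates when the forget and retain updates interfere, which matches the conflict-resolution role the abstract describes.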
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 5144