GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients

Published: 20 Jul 2024, Last Modified: 06 Aug 2024. MM 2024 Poster. License: CC BY 4.0
Abstract: As concerns over privacy protection grow and relevant laws come into effect, machine unlearning (MU) has emerged as a pivotal research area. Due to the complexity of the forgetting data distribution, sample-wise MU remains an open challenge. Gradient ascent, as the inverse of gradient descent, is a natural fit for machine unlearning, which is itself the inverse process of machine learning. However, the straightforward gradient ascent MU method suffers from a trade-off among effectiveness, fidelity, and efficiency. In this work, we analyze the gradient ascent MU process from a multi-task learning (MTL) perspective. This view reveals two problems that cause the trade-off: the gradient direction problem and the gradient dominance problem. To address these problems, we propose a novel MU method, namely GDR-GMA, consisting of Gradient Direction Rectification (GDR) and Gradient Magnitude Adjustment (GMA). For the gradient direction problem, GDR rectifies the direction between conflicting gradients by projecting a gradient onto the plane orthogonal to the conflicting gradient. For the gradient dominance problem, GMA dynamically adjusts the magnitudes of the update gradients by assigning each a dynamic magnitude weight. Furthermore, we evaluate GDR-GMA against several baseline methods in three sample-wise MU scenarios: random data forgetting, sub-class forgetting, and class forgetting. Extensive experimental results demonstrate the superior performance of GDR-GMA in effectiveness, fidelity, and efficiency. Code is available at https://github.com/RUIYUN-ML/GDR-GMA.
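The two components described in the abstract can be illustrated with a minimal NumPy sketch. The projection step follows the standard rule for resolving conflicting gradients (project one gradient onto the plane orthogonal to the other when their inner product is negative). The magnitude weight used here, a simple ratio of gradient norms, is an illustrative stand-in: the abstract does not specify the paper's actual dynamic weight, and the function names are hypothetical.

```python
import numpy as np

def rectify_direction(g_forget, g_retain):
    """GDR-style step (sketch): if the forgetting and retaining gradients
    conflict (negative inner product), project g_forget onto the plane
    orthogonal to g_retain so the update no longer opposes retention."""
    dot = g_forget @ g_retain
    if dot < 0:
        g_forget = g_forget - (dot / (g_retain @ g_retain)) * g_retain
    return g_forget

def adjust_magnitude(g_forget, g_retain, eps=1e-12):
    """GMA-style step (illustrative choice only): rescale the forgetting
    gradient by the norm ratio so it cannot dominate the retaining one."""
    w = np.linalg.norm(g_retain) / (np.linalg.norm(g_forget) + eps)
    return w * g_forget

# Toy example with conflicting gradients (inner product = -1 < 0).
g_f = np.array([1.0, -2.0])
g_r = np.array([1.0, 1.0])

g_f_rect = rectify_direction(g_f, g_r)      # -> [1.5, -1.5], orthogonal to g_r
g_f_final = adjust_magnitude(g_f_rect, g_r)  # -> [1.0, -1.0], same norm as g_r
```

After rectification the forgetting gradient no longer has a component opposing the retaining gradient, and after adjustment its magnitude matches the retaining gradient's, so neither update direction dominates the combined step.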
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Media Interpretation, [Systems] Data Systems Management and Indexing
Relevance To Conference: In this work, we introduce a novel approach to machine unlearning, focusing on the selective removal of data from machine learning models. Our method is particularly relevant to the evolving field of multimedia research, where data privacy and the right to be forgotten are of paramount importance. By enabling the precise deletion of specific information without compromising the integrity of the remaining data, we help strengthen data privacy protections in multimedia systems. Moreover, this capability for selective data removal introduces a new level of flexibility in multimedia data systems management, allowing dynamic updates and modifications in response to user requests or legal requirements. This not only safeguards user privacy but also helps maintain the relevance and accuracy of multimedia content over time.
Supplementary Material: zip
Submission Number: 1860
