OFMU: OPTIMIZATION-DRIVEN FRAMEWORK FOR MACHINE UNLEARNING

Published: 26 Jan 2026 · Last Modified: 02 Mar 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: machine unlearning, large language models, privacy, bi-level optimization, convergence analysis, Trustworthy Machine Learning, Gradient-Based Methods, Safety in LLMs
TL;DR: We propose OFMU, a penalty-based bi-level optimization framework for machine unlearning that prioritizes forgetting while preserving utility, with provable convergence and state-of-the-art performance on large language models and vision tasks.
Abstract: Large language models deployed in sensitive applications increasingly require the ability to unlearn specific knowledge, such as user-requested data, copyrighted material, or outdated information, without retraining from scratch, in order to ensure regulatory compliance, user privacy, and safety. This task, known as machine unlearning, aims to remove the influence of targeted data (forgetting) while maintaining performance on the remaining data (retention). A common approach is to formulate this as a multi-objective problem and reduce it to a single-objective problem via scalarization, where the forgetting and retention losses are combined in a weighted sum. However, this often results in unstable training dynamics and degraded model utility due to conflicting gradient directions. To address these challenges, we propose OFMU, a penalty-based bi-level optimization framework that explicitly prioritizes forgetting while preserving retention through a hierarchical structure. Our method enforces forgetting via an inner maximization step that incorporates a similarity-aware penalty to decorrelate the gradients of the forget and retention objectives, and restores utility through an outer minimization step. To ensure scalability, we develop a two-loop algorithm with provable convergence guarantees under both convex and non-convex regimes. We further provide a rigorous theoretical analysis of convergence rates and show that our approach achieves better trade-offs between forgetting efficacy and model utility than prior methods. Extensive experiments across vision and language benchmarks demonstrate that OFMU consistently outperforms existing unlearning methods in both forgetting efficacy and retained utility.
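The two-loop structure the abstract describes, an inner maximization that enforces forgetting with a similarity-aware penalty, followed by an outer minimization that restores utility, can be sketched on toy quadratic surrogates. Everything below is an illustrative assumption rather than the paper's actual method: the quadratic losses, the `forget_target`/`retain_target` points, the penalty weight `lam`, and the step sizes are all hypothetical stand-ins for the LLM objectives and hyperparameters OFMU uses.

```python
import numpy as np

# Hypothetical surrogate setup: two parameter coordinates, one "forget"
# direction and one "retain" direction. Not the paper's objectives.
forget_target = np.array([1.0, 0.0])   # data whose influence we erase
retain_target = np.array([0.0, 1.0])   # data whose fit we preserve

def forget_loss(theta):
    # Low when theta still fits the forget data; the inner loop ASCENDS it.
    return 0.5 * np.sum((theta - forget_target) ** 2)

def retain_loss(theta):
    return 0.5 * np.sum((theta - retain_target) ** 2)

def cosine(u, v, eps=1e-12):
    # Cosine similarity between two gradient vectors.
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

def ofmu_update(theta, lam=0.5, eta_in=0.1, eta_out=0.1, inner_steps=5):
    """One outer iteration of a penalty-based two-loop scheme (sketch):
    the inner loop ascends the forget loss while a similarity-aware
    penalty damps the component of the forget gradient that is aligned
    with the retain gradient; the outer step then descends the retain
    loss to restore utility."""
    for _ in range(inner_steps):
        g_f = theta - forget_target        # analytic grad of forget_loss
        g_r = theta - retain_target        # analytic grad of retain_loss
        sim = cosine(g_f, g_r)             # gradient alignment measure
        ascent = g_f - lam * sim * g_r     # decorrelated ascent direction
        theta = theta + eta_in * ascent    # inner step: maximize forgetting
    theta = theta - eta_out * (theta - retain_target)  # outer: restore utility
    return theta

theta = np.zeros(2)
for _ in range(5):
    theta = ofmu_update(theta)
# Forgetting: theta is pushed away from forget_target along the first
# coordinate; retention: the second coordinate is pulled toward retain_target.
```

In this sketch the penalty term `lam * sim * g_r` plays the decorrelation role the abstract assigns to the similarity-aware penalty: when the forget and retain gradients point the same way, part of the ascent step is cancelled so that forgetting does not ride along the retention direction.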
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 12190