Memory-Aware Low-Rank Selective Unlearning for Efficient Foundation Model Adaptation

07 May 2026 (modified: 09 May 2026) · ICML 2026 Workshop CoLoRAI Submission · CC BY 4.0
Keywords: Low-Rank Adaptation, LoRA, Machine Unlearning, Parameter-Efficient Fine-Tuning, Foundation Models, Selective Unlearning, Responsible AI, Memory-Aware Learning
TL;DR: We propose MALSU, a memory-aware framework for selective forgetting inside LoRA adapter spaces that enables efficient unlearning while preserving retained model capabilities.
Abstract: Large foundation models are commonly adapted using parameter-efficient fine-tuning methods such as Low-Rank Adaptation (LoRA), where task-specific knowledge is encoded through compact low-rank updates rather than full-model parameter changes. While this improves efficiency, it also creates a new challenge: how can a model selectively forget sensitive, copyrighted, biased, or undesirable knowledge after adaptation without retraining the full model or damaging retained capabilities? We propose Memory-Aware Low-Rank Selective Unlearning (MALSU), a framework for targeted forgetting directly inside LoRA adapter subspaces. MALSU treats low-rank adapters as compact memory carriers and optimizes a combined objective with three components: a forgetting loss on target examples, a retention loss on preserved examples, and a memory-budget regularizer that constrains low-rank parameter drift. Unlike traditional unlearning methods that require global parameter modification, MALSU performs forgetting only inside low-rank adaptation modules while keeping the backbone frozen. We present the formulation, algorithm, and evaluation protocol for MALSU, and illustrate its experimental behavior. Our results suggest that low-rank representations can function not only as efficient adaptation mechanisms, but also as controllable interfaces for scalable machine unlearning and modular memory editing in foundation models.
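The abstract describes a three-term objective applied only to the LoRA factors while the backbone stays frozen. The following is a minimal sketch of how such an objective could be operationalized; the `LoRALinear` module, the `malsu_step` function, the weights `lam_f` and `lam_m`, and the use of gradient ascent as the forgetting term are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen linear backbone with a trainable low-rank update W + B @ A."""
    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in), requires_grad=False)  # frozen backbone
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # trainable low-rank factors
        self.B = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        return x @ (self.weight + self.B @ self.A).T

model = LoRALinear(d_in=16, d_out=8)
lora_params = [model.A, model.B]
anchor = [p.detach().clone() for p in lora_params]      # reference point for the drift penalty
opt = torch.optim.AdamW(lora_params, lr=1e-3)           # only the adapter factors are optimized

def malsu_step(x_forget, y_forget, x_retain, y_retain, lam_f=1.0, lam_m=0.1):
    """One illustrative update: retention loss + forgetting term + memory-budget regularizer."""
    loss_retain = F.cross_entropy(model(x_retain), y_retain)       # preserve behavior on retained data
    loss_forget = -F.cross_entropy(model(x_forget), y_forget)      # ascend on forget targets (one simple choice)
    drift = sum(((p - a) ** 2).sum() for p, a in zip(lora_params, anchor))  # constrain low-rank parameter drift
    loss = loss_retain + lam_f * loss_forget + lam_m * drift
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# toy batches standing in for forget and retain sets
xf, yf = torch.randn(4, 16), torch.randint(0, 8, (4,))
xr, yr = torch.randn(4, 16), torch.randint(0, 8, (4,))
print(malsu_step(xf, yf, xr, yr))
```

Because only `A` and `B` receive gradients, the update edits the adapter subspace alone; the anchored drift penalty is one simple way to realize a memory budget that keeps the adapter close to its pre-unlearning state.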
Submission Number: 71