Understanding Federated Unlearning through the Lens of Memorization

ICLR 2026 Conference Submission10555 Authors

18 Sept 2025 (modified: 08 Oct 2025)
Keywords: federated learning, federated unlearning, memorization
TL;DR: We show that federated unlearning should remove only uniquely memorized information in the unlearning dataset, and our FedMemEraser method achieves this efficiently with strong fairness, generalization, and near-retraining performance.
Abstract: Federated learning (FL) must support unlearning to meet privacy regulations. However, existing federated unlearning approaches may overlook the information shared between the unlearning and retained datasets, leading to ineffective unlearning and unfairness between clients. We revisit this problem through the lens of memorization, showing that only information uniquely memorized from the unlearning dataset should be removed, while shared patterns should remain. We then propose the Grouped Memorization Evaluation, a metric that distinguishes memorized from shared knowledge, and introduce the Federated Memorization Eraser (FedMemEraser), a pruning-based method that resets redundant parameters carrying memorization information. Experimental results demonstrate that our method closely matches the retraining baselines and eliminates memorization information more effectively than other unlearning algorithms.
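The abstract describes FedMemEraser as a pruning-based method that resets redundant parameters carrying memorization information. The paper's actual criterion is not given here; the following is only a minimal sketch of the general idea under stated assumptions: per-parameter importance is approximated by gradient magnitude on the forget and retained data, and parameters important for the forget data but redundant for the retained data are reset to their initial values. The function name, the threshold `tau`, and the gradient-magnitude heuristic are all hypothetical, not the authors' method.

```python
import numpy as np

def memorization_reset(weights, init_weights, grads_forget, grads_retain, tau=0.1):
    """Hypothetical sketch of pruning-based unlearning.

    A parameter is treated as carrying "unique memorization" if its
    gradient magnitude on the forget data exceeds tau while its gradient
    magnitude on the retained data does not; such parameters are reset
    to their initialization, leaving shared knowledge untouched.
    """
    forget_imp = np.abs(grads_forget)   # importance w.r.t. forget data
    retain_imp = np.abs(grads_retain)   # importance w.r.t. retained data
    # Important for forgetting, redundant for retention -> reset.
    mask = (forget_imp > tau) & (retain_imp <= tau)
    pruned = np.where(mask, init_weights, weights)
    return pruned, mask

# Toy usage: only the first parameter matters solely for the forget data.
weights = np.array([1.0, 2.0, 3.0, 4.0])
init_weights = np.zeros(4)
grads_forget = np.array([0.5, 0.05, 0.5, 0.05])
grads_retain = np.array([0.05, 0.05, 0.5, 0.5])
pruned, mask = memorization_reset(weights, init_weights, grads_forget, grads_retain)
print(mask.tolist())    # [True, False, False, False]
print(pruned.tolist())  # [0.0, 2.0, 3.0, 4.0]
```

The design choice sketched here, resetting rather than merely zeroing, mirrors the abstract's framing that unlearning should remove uniquely memorized information while leaving patterns shared with the retained data intact.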
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 10555