Keywords: Federated Domain Unlearning
Abstract: Federated Learning (FL) enables distributed clients to collaboratively train machine learning models without sharing raw data, enhancing user privacy. However, stringent data protection regulations, such as the GDPR, mandate the erasure of certain domain-specific knowledge from trained models, raising the critical challenge of federated domain unlearning. Unlike traditional federated unlearning approaches that focus on removing data at the client, class, or sample level within homogeneous domains, federated domain unlearning aims to selectively remove learned knowledge associated with entire data domains, which frequently differ across clients in real-world settings. To address this challenge, we propose \underline{F}ederated Domain \underline{U}nlearning via \underline{D}omain-aware \underline{W}eight \underline{S}urgery (\texttt{FU-DWS}), a novel framework that leverages channel activation patterns to identify domain-specific weights and applies differential update strategies based on their importance. \texttt{FU-DWS} performs ``surgical'' weight modifications by precisely measuring channel-level domain sensitivity, then selectively pruning and fine-tuning only the components strongly associated with the forgetting domain while preserving knowledge critical to retained domains. A comprehensive evaluation against six baselines across three domain-heterogeneous datasets demonstrates that \texttt{FU-DWS} significantly outperforms existing methods in both unlearning effectiveness and computational efficiency, while maintaining stronger performance on retained domains.
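The weight-surgery idea in the abstract (score each channel's sensitivity to the forgetting domain, then prune only the channels strongly associated with it) could be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the sensitivity metric (mean absolute-activation gap between domains), the function names, and the top-fraction pruning rule are all assumptions for illustration.

```python
import numpy as np

def channel_domain_sensitivity(acts_forget, acts_retain):
    """Hypothetical sensitivity score: how much more each channel
    activates on the forgetting domain than on retained domains.
    acts_*: (num_samples, num_channels) activation matrices."""
    mu_f = np.abs(acts_forget).mean(axis=0)  # per-channel mean |activation|, forget domain
    mu_r = np.abs(acts_retain).mean(axis=0)  # per-channel mean |activation|, retained domains
    return mu_f - mu_r                       # high score => channel is forget-domain-specific

def surgical_prune(weights, sensitivity, top_frac=0.2):
    """Zero out the output channels most associated with the forgetting
    domain, leaving the rest of the layer untouched (illustrative only;
    the paper also fine-tunes the modified components afterwards)."""
    k = max(1, int(top_frac * sensitivity.size))
    prune_idx = np.argsort(sensitivity)[-k:]  # top-k most forget-sensitive channels
    pruned = weights.copy()
    pruned[prune_idx] = 0.0                   # "surgery": selective channel pruning
    return pruned, prune_idx

# Toy demo: channel 2 fires strongly only on the forgetting domain.
rng = np.random.default_rng(0)
acts_f = rng.normal(0.0, 1.0, (64, 8))
acts_f[:, 2] += 3.0
acts_r = rng.normal(0.0, 1.0, (64, 8))
s = channel_domain_sensitivity(acts_f, acts_r)
w, idx = surgical_prune(rng.normal(size=(8, 16)), s, top_frac=0.125)
```

In an FL setting, clients holding the forgetting domain would compute such sensitivity statistics locally, so only aggregated channel scores (not raw data) would need to reach the server before the pruning-and-fine-tuning step.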
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 1432