Keywords: Federated Domain Unlearning
Abstract: Federated Learning (FL) has emerged as a powerful training paradigm that coordinates multiple clients to collaboratively train a shared model while preserving data privacy. The Right to Be Forgotten (RTBF), a key provision in many data protection regulations, calls for effective approaches to remove, or unlearn, specific training data from the learned FL model. Accordingly, various federated unlearning techniques have been proposed to remove the influence of specific data while preserving the global model's performance. However, existing federated unlearning approaches are primarily developed and tested in single-domain scenarios, and their effectiveness in multi-domain environments remains unverified. In such heterogeneous scenarios, domain differences pose significant challenges not only to the unlearning process itself but also to the methods used to verify whether unlearning has succeeded. This raises a critical question: can traditional unlearning validation methods, originally designed for single-domain tasks, still provide reliable assessments in multi-domain scenarios? Given the prevalence of multi-domain data in real-world applications, addressing these challenges is crucial for the practical deployment of federated unlearning. In this paper, we address these gaps by presenting the first comprehensive empirical study on Federated Domain Unlearning. We systematically analyze the characteristics, limitations, and effectiveness of current unlearning and validation techniques under multi-domain conditions. Additionally, we propose novel validation methodologies explicitly tailored for Federated Domain Unlearning, enabling precise assessment and verification of domain-specific data removal without compromising the overall integrity and performance of the global model.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 11257