FedSycle: Mitigating Post-Unlearning Performance Inconsistency in Federated Learning via Latent Feature Decoupling

ICLR 2026 Conference Submission 25324 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: post-unlearning performance inconsistency
TL;DR: We propose a high-performance federated unlearning algorithm that preserves model accuracy while reducing cross-domain performance inconsistency, with theoretical convergence guarantees and experimental validation.
Abstract: Federated Learning (FL) safeguards data privacy by enabling collaborative model training without centralizing client data. Emerging 'Right to Be Forgotten' mandates necessitate Federated Unlearning (FU), which allows clients to revoke their data's influence on the global model. However, a critical yet overlooked challenge in FU is the emergence of performance inconsistency across clients following an unlearning event. When a client departs, the global model's accuracy can degrade unevenly for the remaining participants, leading to unfairness and disincentivizing collaboration. To address this, we propose FedSycle, a novel FU framework that leverages pre-trained models to enable fast retraining and enhance performance consistency. FedSycle operates by decoupling client data into distinct latent representations: one capturing semantic content (retained locally for privacy and to boost client-side retraining efficiency) and another capturing domain-specific attributes (e.g., texture, color). Crucially, only the less sensitive domain attributes are aggregated on the server. The server then utilizes these aggregated attributes to synthesize auxiliary data, which guides the global model update, effectively recalibrating its performance across all remaining client domains. We provide theoretical convergence guarantees for FedSycle. Extensive experiments on standard benchmarks (PACS, DomainNet) demonstrate its superiority. FedSycle not only achieves state-of-the-art unlearning effectiveness but also significantly mitigates performance inconsistency, reducing its variance by up to 83.2% compared to leading baselines, while simultaneously improving the average accuracy for non-target clients by over 31%.
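
To make the decoupling-and-aggregation protocol in the abstract concrete, the following is a minimal, hypothetical Python/PyTorch sketch, not the authors' implementation. The module names (DecouplingEncoder, client_update, server_aggregate), dimensions, and the noise-based auxiliary synthesis are illustrative assumptions; the sketch only shows the flow in which content latents stay on the client while domain latents are uploaded and aggregated.

# Hypothetical sketch of FedSycle-style latent decoupling (not the paper's code).
import torch
import torch.nn as nn

class DecouplingEncoder(nn.Module):
    """Toy encoder splitting an input feature into content and domain latents."""
    def __init__(self, in_dim=512, content_dim=128, domain_dim=32):
        super().__init__()
        self.content_head = nn.Linear(in_dim, content_dim)  # retained locally
        self.domain_head = nn.Linear(in_dim, domain_dim)    # shared with server

    def forward(self, x):
        return self.content_head(x), self.domain_head(x)

def client_update(encoder, features):
    """Client keeps content latents private; only domain statistics leave."""
    content_z, domain_z = encoder(features)
    # Per the abstract, only the less sensitive domain attributes are uploaded.
    return domain_z.mean(dim=0).detach()

def server_aggregate(domain_stats, n_aux=16):
    """Server averages domain attributes from the remaining clients and
    synthesizes auxiliary latents to recalibrate the global model (assumed
    here to be simple perturbation around the aggregated attribute)."""
    agg = torch.stack(domain_stats).mean(dim=0)
    noise = torch.randn(n_aux, agg.shape[0])
    return agg + 0.1 * noise

# Usage: three remaining clients after an unlearning event.
torch.manual_seed(0)
enc = DecouplingEncoder()
stats = [client_update(enc, torch.randn(64, 512)) for _ in range(3)]
aux = server_aggregate(stats)
print(aux.shape)  # torch.Size([16, 32])

The key design point the sketch mirrors is that the server never sees content latents, only aggregated domain attributes, from which it derives auxiliary data to even out accuracy across the remaining client domains.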
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 25324