Abstract: Federated learning, as a distributed machine learning paradigm, enhances privacy protection but faces the challenge of data heterogeneity. Data-free knowledge distillation (DFKD) methods attempt to overcome this challenge by using a generator to synthesize samples for fine-tuning a global model. However, these methods often suffer from significant shifts in output distribution, leading to catastrophic forgetting. To tackle these issues, we propose FedFLD, a novel federated forget-less distillation framework that mitigates catastrophic forgetting in DFKD while addressing the heterogeneity challenge. Specifically, FedFLD guides the generator's training from three key aspects and constrains the output distribution with an elastic weight consolidation penalty term. By synthesizing diverse samples from different perspectives through additional generator updates, FedFLD facilitates effective knowledge distillation from local models to the global model. Additionally, the global model is further optimized via a heterogeneity fine-tuning process, mitigating the bias introduced by heterogeneity and yielding a more expressive and robust model.
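For reference, the elastic weight consolidation penalty mentioned above follows, in its standard form (Kirkpatrick et al., 2017), the expression below; the exact anchor parameters and weighting used in FedFLD's constraint are not specified here and should be read as an illustrative assumption, where $\mathcal{L}_{\mathrm{distill}}$ is the distillation objective, $\theta^{*}$ denotes the anchor (pre-distillation) parameters, $F_i$ a diagonal Fisher information estimate, and $\lambda$ the penalty strength:

$\mathcal{L}(\theta) = \mathcal{L}_{\mathrm{distill}}(\theta) + \frac{\lambda}{2} \sum_i F_i \left(\theta_i - \theta_i^{*}\right)^2$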