Abstract: Federated learning inherently provides some privacy protection, but this is often inadequate in real-world scenarios. Existing privacy-preserving methods frequently incur prohibitive time overheads or cause non-negligible degradation of model performance, and thus face a trade-off between performance and privacy. In this work, we propose a novel Federated Privacy-Preserving Knowledge Transfer framework, namely FedPPKT, which employs data-free knowledge distillation in a meta-learning manner to rapidly generate pseudo data and perform privacy-preserving knowledge transfer. FedPPKT establishes a protective barrier between the original private data and the federated model, thereby safeguarding user privacy. Furthermore, its few-round strategy reduces the number of communication rounds, further lowering the risk of exposing user data. With the help of the meta generator, uneven local label distributions across clients are alleviated, mitigating data heterogeneity and improving model performance. Experiments show that FedPPKT outperforms state-of-the-art privacy-preserving federated learning methods. Our code is publicly available at https://github.com/HIT-weiqb/FedPPKT.
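To make the core idea of data-free knowledge distillation concrete, the toy sketch below trains a student on pseudo inputs produced by a generator, matching the teacher's soft labels without ever touching real data. This is only an illustration of the general technique, not the paper's actual method: the linear-softmax teacher/student, the frozen linear "generator", and all names (`W_teacher`, `pseudo_x`, etc.) are hypothetical simplifications, whereas FedPPKT uses a trainable meta generator and neural models.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical "teacher": a fixed linear-softmax model standing in for a
# trained client model whose private training data is unavailable.
D, C = 8, 4
W_teacher = rng.normal(size=(D, C))

# Hypothetical "generator": a linear map from noise to pseudo inputs.
# (In FedPPKT this role is played by a trainable meta generator; it is
# frozen here purely to keep the sketch short.)
Z = rng.normal(size=(64, 3))
G = rng.normal(size=(3, D))
pseudo_x = Z @ G  # pseudo data -- no real user data is involved

# Student distills from the teacher using only the pseudo data:
# minimize cross-entropy between teacher and student output distributions.
W_student = np.zeros((D, C))
lr = 0.1
losses = []
for _ in range(200):
    p_t = softmax(pseudo_x @ W_teacher)   # soft labels from the teacher
    p_s = softmax(pseudo_x @ W_student)
    loss = -np.mean(np.sum(p_t * np.log(p_s + 1e-12), axis=1))
    losses.append(loss)
    # Exact gradient of soft-label cross-entropy for a linear-softmax student.
    grad = pseudo_x.T @ (p_s - p_t) / len(pseudo_x)
    W_student -= lr * grad
```

After training, the student's predictions align with the teacher's even though it never saw the teacher's training data, which is the privacy barrier the abstract describes.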