Abstract: Federated learning is a distributed learning paradigm in which a global model is trained on data samples from multiple clients without the need to share raw data. However, it comes with significant challenges in system design, data quality, and communication. Recent research highlights a serious concern: data privacy can leak through reverse-engineering of model gradients at a malicious server. Moreover, a single global model cannot provide good utility for individual clients when the local training data are heterogeneous in quantity, quality, and distribution. Personalized federated learning is therefore highly desirable in practice to tailor the trained model for local use. In this article, we propose privacy-preserving and personalized federated learning, a unified federated learning framework that addresses privacy preservation and personalization simultaneously. The intuition behind our framework is to learn part of the model gradients at the server and the rest at the local clients. To evaluate the effectiveness of the proposed framework, we conduct extensive experiments on four image classification data sets and show that it yields better privacy and personalization performance than existing methods. We also argue that privacy preservation and personalization are essentially two facets of deep learning models, offering a unique perspective on their intrinsic interrelation.
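To make the split-update intuition concrete, here is a minimal sketch (not the authors' implementation) of the general pattern the abstract describes: one part of the model is aggregated at the server, while the remaining part is updated only on each client. The model structure and the helper names `SplitModel`, `local_step`, and `fedavg` are illustrative assumptions, not from the paper.

```python
# Minimal sketch: shared feature extractor is averaged at the server,
# personalized head stays on each client. Assumed, simplified setup.
import copy
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """Toy classifier: 'features' is the shared part, 'head' is personalized."""
    def __init__(self, in_dim=32, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())  # shared with server
        self.head = nn.Linear(64, num_classes)                           # kept on the client

    def forward(self, x):
        return self.head(self.features(x))

def local_step(model, data, target, lr=0.01):
    """One local SGD step on a client; both parts are trained locally."""
    loss = nn.functional.cross_entropy(model(data), target)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
    return loss.item()

def fedavg_shared(global_model, client_models):
    """Server averages only the shared ('features') parameters; each client's
    'head' never leaves the client, limiting what a curious server could
    reverse-engineer from the shared updates."""
    shared = [k for k in global_model.state_dict() if k.startswith("features")]
    avg = {k: torch.stack([cm.state_dict()[k] for cm in client_models]).mean(0)
           for k in shared}
    for m in [global_model, *client_models]:
        m.load_state_dict({**m.state_dict(), **avg})

# Hypothetical training rounds with 3 clients and random data.
torch.manual_seed(0)
global_model = SplitModel()
clients = [copy.deepcopy(global_model) for _ in range(3)]
for rnd in range(2):
    for cm in clients:
        x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
        local_step(cm, x, y)
    fedavg_shared(global_model, clients)
```

In this sketch, which layers count as "shared" versus "personalized" is an arbitrary choice; the paper's actual partitioning of gradients between server and clients may differ.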