SuperMPFL: A Supermask-Based Mechanism for Personalized Federated Learning

Zhe Sun, Shangzhe Li, Lihua Yin, Yahong Chen, Aohai Zhang, Meifan Zhang, Yuanyuan He

Published: 01 Jan 2025 · Last Modified: 05 Nov 2025 · IEEE Transactions on Network and Service Management · CC BY-SA 4.0
Abstract: Personalized federated learning (PFL) is a specialized application of the federated learning paradigm designed to support personalized use cases. Unlike traditional federated learning, which aims to train a single high-quality global model, PFL tailors a model to each individual user. Most existing PFL approaches adopt training architectures similar to those of traditional federated learning, relying on global or partial model sharing during training. While this sharing improves model personalization across clients, it also introduces challenges, including the risk of data leakage and increased communication overhead. To address these challenges, we propose SuperMPFL, a novel PFL framework that leverages supermasks to tackle issues of accuracy, privacy, and efficiency. In particular, SuperMPFL uses masking and ranking strategies to obscure the true gradient information: by converting gradients into ranked numerical representations, it strengthens privacy protection during training. This conversion also reduces communication overhead, since the ranked representations carry significantly less information than conventional gradient updates. In SuperMPFL, each client receives the global model and then emphasizes its personalized parameters, particularly at the model’s edges. This design not only improves accuracy but also strengthens robustness against privacy attacks. Evaluations on standard federated learning benchmarks show that our approach outperforms state-of-the-art methods in terms of accuracy, privacy, and efficiency.
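The two core ideas of the abstract — selecting a binary supermask over model weights and transmitting rank-valued rather than raw-valued information — can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the function names (`supermask`, `rank_encode`), the `keep_ratio` parameter, and the top-k selection rule are illustrative assumptions standing in for the unspecified details of SuperMPFL.

```python
import numpy as np

def supermask(scores: np.ndarray, keep_ratio: float) -> np.ndarray:
    # Binary mask that keeps the top `keep_ratio` fraction of weights
    # by importance score (an assumed edge-popup-style selection rule).
    k = max(1, int(keep_ratio * scores.size))
    flat = scores.ravel()
    threshold = np.partition(flat, -k)[-k]  # k-th largest score
    return (scores >= threshold).astype(np.float32)

def rank_encode(scores: np.ndarray) -> np.ndarray:
    # Replace raw score values by their ranks (0 = smallest), so the
    # client uploads orderings rather than true gradient magnitudes.
    flat = scores.ravel()
    ranks = np.empty(flat.size, dtype=np.int64)
    ranks[np.argsort(flat)] = np.arange(flat.size)
    return ranks.reshape(scores.shape)

rng = np.random.default_rng(0)
scores = rng.standard_normal((4, 4))   # toy importance scores for 16 weights
mask = supermask(scores, keep_ratio=0.25)
ranks = rank_encode(scores)
print(int(mask.sum()))   # 4 of 16 weights kept
```

A server receiving only `ranks` learns the ordering of scores but not their magnitudes, which is the intuition behind the claimed privacy benefit; the integer ranks are also cheaper to transmit than full-precision gradients.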