Abstract: Federated learning faces significant challenges in balancing communication efficiency, model accuracy, and privacy protection. While model compression effectively reduces communication overhead, existing approaches typically adopt a fixed compression rate, which fails to balance compression efficiency against model performance dynamically and often overlooks privacy concerns. To address these issues, we propose FedCP, a Federated learning framework with personalized model Compression and Privacy protection. The framework integrates a Personalized Compression Mechanism (PCM) and an Optimized Piecewise noise Mechanism (OPM). PCM dynamically adjusts each client's model compression rate according to its privacy budget and communication cost, striking a favorable trade-off between communication overhead and model accuracy. Because model compression itself provides a degree of privacy protection, OPM further refines the noise-injection strategy analytically, optimizing noise addition under larger privacy budgets and thereby improving model performance. Experimental results on multiple real-world datasets demonstrate that FedCP outperforms existing baselines in both model accuracy and communication efficiency while ensuring rigorous privacy protection. This paper thus offers an effective approach to communication optimization in federated learning and introduces a more refined privacy-preserving mechanism with both theoretical and practical implications.
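For illustration only, the sketch below pairs a hypothetical PCM-style rate selector with the standard Piecewise Mechanism of Wang et al. (2019), of which OPM is described as an optimized variant. The abstract does not state PCM's actual rule, so choose_compression_rate, its r_min/r_max bounds, and the top-k sparsifier are assumptions, not the paper's method.

import math
import random

import numpy as np


def choose_compression_rate(eps, comm_cost, r_min=0.05, r_max=0.5):
    # Hypothetical PCM-style rule (NOT the paper's formula): keep fewer
    # parameters when the link is expensive or the privacy budget is small.
    score = (eps / (eps + 1.0)) / (1.0 + comm_cost)
    return r_min + (r_max - r_min) * score


def top_k_sparsify(update, rate):
    # Keep the largest-magnitude fraction `rate` of entries, zero the rest.
    k = max(1, int(rate * update.size))
    flat = update.ravel().copy()
    drop = np.argpartition(np.abs(flat), -k)[:-k]   # indices of small entries
    flat[drop] = 0.0
    return flat.reshape(update.shape)


def piecewise_mechanism(t, eps):
    # Standard Piecewise Mechanism (Wang et al., 2019) for t in [-1, 1] under
    # eps-local differential privacy; unbiased, with variance shrinking as eps grows.
    e_half = math.exp(eps / 2.0)
    C = (e_half + 1.0) / (e_half - 1.0)         # outputs lie in [-C, C]
    l = (C + 1.0) / 2.0 * t - (C - 1.0) / 2.0   # high-probability band [l, r]
    r = l + C - 1.0                             # band width is C - 1
    if random.random() < e_half / (e_half + 1.0):
        return random.uniform(l, r)             # report from the band around t
    left_len = l + C                            # remaining mass on [-C, l) and (r, C]
    u = random.uniform(0.0, left_len + (C - r))
    return -C + u if u < left_len else r + (u - left_len)

Under these assumptions, a client would clip each retained coordinate to [-1, 1], pass it through piecewise_mechanism, and upload the sparse noisy update; as eps grows, C approaches 1 and the output concentrates near the true value, which matches the abstract's claim that OPM is tuned for the larger-privacy-budget regime.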
External IDs: dblp:conf/icccn/DingXXW25