Abstract: Personalized Federated Learning (pFL) is among the most popular paradigms in distributed deep learning: it lets clients share knowledge with one another while still producing device-specific personalized models. However, the effectiveness of pFL is severely impeded by fairness challenges and significant communication overhead, since devices holding essential samples must devote extensive resources to model training. To address these issues, we introduce the Fair and Communication-Efficient Personalized Federated Learning (FCE-PFL) framework, which harmonizes performance and device fairness while keeping communication costs in check. Based on Dempster-Shafer theory, FCE-PFL employs assistance and contribution metrics to quantify the auxiliary information a client receives and provides. FCE-PFL then balances and adjusts the involvement of devices in each training iteration, facilitating fair training by constraining the maximum resource consumption per device and reducing the overall communication overhead. Rigorous experiments on the CIFAR10 and CIFAR100 datasets show that our framework outperforms existing methods in accuracy, demonstrating its potential as a fair and efficient pFL solution.
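The abstract gives no concrete formulas, but the participation-scheduling idea can be illustrated with a minimal sketch. The Python below assumes hypothetical per-client contribution and assistance scores (supplied here as plain numbers rather than derived via Dempster-Shafer evidence combination) and selects clients for a round while enforcing a per-device participation cap; all names and the ranking rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch only: the paper derives its metrics from Dempster-Shafer
# theory; here we assume precomputed contribution/assistance scores per client.
from dataclasses import dataclass


@dataclass
class ClientState:
    contribution: float           # assumed: how much this client's updates help others
    assistance: float             # assumed: how much auxiliary information it has received
    rounds_participated: int = 0  # tracked to cap per-device resource consumption


def select_clients(clients: dict[str, ClientState],
                   budget: int,
                   max_rounds_per_client: int) -> list[str]:
    """Pick at most `budget` clients for the next training round.

    Clients that have hit the participation cap are skipped; the rest are
    ranked by contribution relative to the assistance they have already
    received (a purely illustrative fairness heuristic).
    """
    eligible = {cid: s for cid, s in clients.items()
                if s.rounds_participated < max_rounds_per_client}
    ranked = sorted(eligible,
                    key=lambda cid: eligible[cid].contribution - eligible[cid].assistance,
                    reverse=True)
    selected = ranked[:budget]
    for cid in selected:
        clients[cid].rounds_participated += 1
    return selected


# Example: three clients, a per-round budget of 2, and a cap of 5 rounds each.
clients = {
    "A": ClientState(contribution=0.9, assistance=0.2),
    "B": ClientState(contribution=0.5, assistance=0.6),
    "C": ClientState(contribution=0.7, assistance=0.1),
}
print(select_clients(clients, budget=2, max_rounds_per_client=5))  # e.g. ['A', 'C']
```

Capping `rounds_participated` is one simple way to keep any single device from bearing a disproportionate share of the training and communication load, which is the fairness and efficiency goal the abstract describes.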
External IDs: dblp:conf/wasa/ZhengZWMZ24