Abstract: To address the challenge of hardware heterogeneity in Federated Learning (FL), several model-heterogeneous FL schemes have been proposed on top of traditional model-homogeneous approaches. Among the state-of-the-art (SOTA) model-heterogeneous FL approaches, Partial Training (PT), in which submodels are extracted from the global model for local training, is considered one of the most promising. However, existing studies focus either on the submodel extraction scheme or on creating personalized submodels for each client; they either lack a global model update mechanism or introduce high computational complexity, which can result in poor adaptability, especially in edge computing environments with non-IID data distributions. In this paper, we present CDPFL, Contribution-Driven Personalization for Model-Heterogeneous Federated Learning, in which each local client's contribution to the global model is evaluated using the Shapley Value. Given this contribution information, a Gated Recurrent Unit (GRU) then determines the weight of each client in the next round of model aggregation. In this way, CDPFL controls the update of the global model based on client contributions. To evaluate CDPFL, we compare it against SOTA PT-based methods. Experimental results show that our approach improves global model accuracy by up to 10.17% under high data heterogeneity and consistently outperforms all baselines in both high- and low-heterogeneity scenarios.
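The abstract describes a two-stage mechanism: estimate each client's contribution via the Shapley Value, then pass the contribution information through a GRU to produce the next round's aggregation weights. The sketch below illustrates that pipeline under stated assumptions; it is not the paper's implementation. The `utility` function, the Monte Carlo permutation sampling, the hand-rolled NumPy GRU cell, and the softmax readout `W_out` are all illustrative stand-ins (in CDPFL, utility would be measured as global-model performance on validation data).

```python
import numpy as np

# Hypothetical toy setup (assumption): utility(S) stands in for the global
# model's quality when aggregating the clients in subset S. The hidden
# per-client "quality" values below exist only to make the toy runnable.
rng = np.random.default_rng(0)
n_clients = 4
client_quality = np.array([0.4, 0.1, 0.3, 0.2])

def utility(subset):
    # Toy utility with mildly diminishing returns in subset size.
    if len(subset) == 0:
        return 0.0
    return client_quality[list(subset)].sum() / (1.0 + 0.1 * len(subset))

def shapley_monte_carlo(n, utility, n_perms=2000):
    """Monte Carlo estimate of each client's Shapley value:
    average marginal gain of adding the client over random orderings."""
    phi = np.zeros(n)
    for _ in range(n_perms):
        perm = rng.permutation(n)
        prev, members = 0.0, []
        for i in perm:
            members.append(i)
            cur = utility(members)
            phi[i] += cur - prev
            prev = cur
    return phi / n_perms

contrib = shapley_monte_carlo(n_clients, utility)

# Minimal GRU cell in NumPy (assumption: untrained random weights, for
# illustration only). It maps the per-round contribution vector to a
# hidden state; aggregation weights come from a softmax over a readout.
class GRUCell:
    def __init__(self, d_in, d_h, seed=0):
        r = np.random.default_rng(seed)
        k = lambda *s: r.normal(0.0, 0.1, s)
        self.Wz, self.Uz, self.bz = k(d_h, d_in), k(d_h, d_h), np.zeros(d_h)
        self.Wr, self.Ur, self.br = k(d_h, d_in), k(d_h, d_h), np.zeros(d_h)
        self.Wh, self.Uh, self.bh = k(d_h, d_in), k(d_h, d_h), np.zeros(d_h)

    def step(self, x, h):
        sig = lambda v: 1.0 / (1.0 + np.exp(-v))
        z = sig(self.Wz @ x + self.Uz @ h + self.bz)        # update gate
        r = sig(self.Wr @ x + self.Ur @ h + self.br)        # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)
        return (1.0 - z) * h + z * h_tilde

gru = GRUCell(d_in=n_clients, d_h=8)
h = np.zeros(8)
for x in [contrib]:          # one round of contributions; more in practice
    h = gru.step(x, h)

# Hypothetical readout: softmax over a linear map of the hidden state
# yields the next round's per-client aggregation weights.
W_out = rng.normal(0.0, 0.1, (n_clients, 8))
logits = W_out @ h
weights = np.exp(logits) / np.exp(logits).sum()
```

The softmax keeps the aggregation weights non-negative and summing to one, so the weighted model average stays a convex combination of client updates; the GRU lets weights depend on the history of contributions rather than a single round.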
External IDs: dblp:conf/ijcnn/ChenZB25