Latency Minimization in Personalized Federated Learning-based Wireless Networks

Published: 2025 · Last Modified: 10 Nov 2025 · ICC 2025 · CC BY-SA 4.0
Abstract: Personalized federated learning (PFL) addresses challenges in federated learning (FL), such as slow convergence and poor communication efficiency, that often arise from limited and diverse user data. However, the effectiveness of PFL is itself limited by high learning latency due to insufficient allocated bandwidth and limited computation capacity, especially when many users participate. In response, we propose a cluster-based joint PFL and resource allocation algorithm, “CFT-PRAUS”, which aims to reduce PFL learning latency while ensuring personalized model accuracy for all users. In CFT-PRAUS, users are first clustered based on similarities in their data distributions. Then, a combined subgradient and particle swarm optimization (PSO)-based algorithm selects users and allocates bandwidth within each cluster. The selected users collaboratively train a common model, which serves as the basis for personalized local adjustments by all users. This approach effectively reduces the learning latency required for model convergence in PFL while maintaining high accuracy across all users. Simulation results demonstrate that CFT-PRAUS significantly outperforms baseline methods in terms of both latency and test accuracy, especially when the data distribution is non-IID.
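The abstract's clustering step — grouping users by the similarity of their data distributions — can be illustrated with a minimal sketch. The distance metric and clustering method below (k-means over normalized per-user label histograms, with farthest-point initialization) are assumptions for illustration; the abstract does not specify which similarity measure CFT-PRAUS uses.

```python
import numpy as np

def label_histograms(user_labels, num_classes):
    """Normalized label histogram per user (empirical data distribution)."""
    hists = np.zeros((len(user_labels), num_classes))
    for i, labels in enumerate(user_labels):
        counts = np.bincount(labels, minlength=num_classes)
        hists[i] = counts / counts.sum()
    return hists

def cluster_users(hists, k, iters=50):
    """Plain k-means on the histograms; returns a cluster id per user.

    Farthest-point initialization keeps the sketch deterministic;
    this is an illustrative choice, not the paper's method.
    """
    centers = [hists[0]]
    for _ in range(k - 1):
        # distance from each user to its nearest chosen center
        d = np.min([np.linalg.norm(hists - c, axis=1) for c in centers], axis=0)
        centers.append(hists[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each user to the nearest center, then recompute centers
        dists = np.linalg.norm(hists[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for c in range(k):
            if (assign == c).any():
                centers[c] = hists[assign == c].mean(axis=0)
    return assign

# Usage: two users hold mostly class-0 data, two hold mostly class-1 data,
# so clustering by label distribution should split them into two groups.
users = [np.array([0, 0, 0, 1]), np.array([0, 0, 1, 0]),
         np.array([1, 1, 1, 0]), np.array([1, 1, 0, 1])]
assign = cluster_users(label_histograms(users, num_classes=2), k=2)
```

Each resulting cluster then runs its own user-selection and bandwidth-allocation step, so the per-cluster common model is trained on users with similar distributions.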