Abstract: Federated Learning (FL) is typically deployed in a client-server architecture, which makes the Edge-Cloud architecture an ideal backbone for FL. A significant challenge in this setup arises from the diverse data feature distributions across different edge locations (i.e., non-IID data). In response, Personalized Federated Learning (PFL) approaches have been developed. Network segmentation is an important approach to achieving PFL, in which the training network is divided into a global segment aggregated by the server and a local segment maintained on the client side. Existing methods determine the segmentation before training, and it remains fixed throughout PFL training. However, our investigation reveals that model representations vary as PFL progresses, and a fixed segmentation may not deliver the best performance across various training settings. To address this, we propose PFed-NS, a PFL framework based on adaptive network segmentation. This adaptive segmentation technique comprises two elements: a mechanism for assessing the divergence of clients' probability density functions constructed from network layers' outputs, and a model for dynamically establishing divergence thresholds beyond which server aggregation is deemed detrimental. Further optimization strategies are proposed to reduce the computation and communication costs incurred by divergence modeling. Moreover, we propose a divergence-based Batch Normalization (BN) strategy to optimize BN performance for network segmentation-based PFL. Extensive experiments have been conducted to compare PFed-NS against recent PFL models. The results demonstrate its superiority in enhancing model accuracy and accelerating convergence.
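To give a concrete flavor of a layer-wise divergence check of this kind, the following is a minimal sketch, not the paper's actual algorithm: it estimates each client's probability density function for one layer's outputs with a histogram, computes the Jensen-Shannon divergence between two clients, and compares it against a threshold to decide whether that layer would remain in the globally aggregated segment. The function name `layer_divergence`, the fixed threshold `tau`, and the choice of JS divergence are illustrative assumptions; PFed-NS establishes its divergence thresholds dynamically.

```python
import numpy as np

def layer_divergence(acts_a: np.ndarray, acts_b: np.ndarray, bins: int = 64) -> float:
    """Jensen-Shannon divergence between histogram estimates of two clients'
    activations for the same layer (illustrative stand-in for the paper's measure)."""
    lo = min(acts_a.min(), acts_b.min())
    hi = max(acts_a.max(), acts_b.max())
    p, _ = np.histogram(acts_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(acts_b, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(x, y):
        mask = x > 0          # terms with x == 0 contribute nothing
        return float(np.sum(x[mask] * np.log(x[mask] / y[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical usage: aggregate a layer globally only while client divergence
# stays below tau; otherwise keep that layer in the personalized local segment.
tau = 0.1                                            # placeholder threshold
rng = np.random.default_rng(0)
acts_client_a = rng.normal(0.0, 1.0, size=10_000)    # mock layer outputs, client A
acts_client_b = rng.normal(0.5, 1.2, size=10_000)    # mock layer outputs, client B
d = layer_divergence(acts_client_a, acts_client_b)
print(f"JS divergence = {d:.4f}, aggregate globally: {d < tau}")
```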