Abstract: Centralized storage in web-based services poses privacy risks for diverse user data, whereas personalized federated learning (PFL) harnesses the inherent privacy-preserving benefits of FL through distributed collaborative training, avoiding the centralization of sensitive data. Moreover, PFL customizes models to individual client needs, enabling personalized web services on non-independent and identically distributed (non-IID) data across varied environments. Existing PFL techniques often rely on prior knowledge to identify personalization layers, overlooking the varying sensitivity of different layers to heterogeneous data; this constrains the model's adaptability to diverse statistical distributions and limits its service effectiveness. To overcome this issue, we introduce FedLD, a layer-wise decoupling method for PFL that precisely captures layer-specific sensitivities to heterogeneous data, enhancing service-oriented model personalization. First, FedLD employs a hypernetwork to evaluate each layer's contribution to performance across varied data distributions, determining the optimal proportion of personalized channels per layer for each client and thus enabling fine-grained, channel-level parameter adjustments tailored to individual service needs. Second, knowledge distillation is introduced to enforce consistency between the representations of personalized and shared weights, promoting collaboration between them. Third, an improved aggregation strategy enhances collaboration among clients with similar data distributions. Extensive experiments show that FedLD achieves more effective and generalizable results than baseline methods across various heterogeneous data settings.
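To make the channel-level decoupling idea concrete, the following is a minimal sketch of how a hypernetwork could map a client embedding to a per-layer personalization ratio, which then splits each layer's channels into personalized and shared subsets. All names, shapes, and the single-linear-layer hypernetwork are illustrative assumptions; the abstract does not specify FedLD's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

LAYER_CHANNELS = [16, 32, 64]  # illustrative channel counts per model layer
EMBED_DIM = 8                  # assumed client-embedding size

# Hypothetical hypernetwork: one linear map from a client embedding to
# one personalization ratio per layer (the real method may be deeper).
W = rng.normal(scale=0.1, size=(len(LAYER_CHANNELS), EMBED_DIM))
b = np.zeros(len(LAYER_CHANNELS))

def personalization_ratios(client_embedding):
    """Sigmoid-squashed per-layer ratios in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(W @ client_embedding + b)))

def split_channels(ratios):
    """For each layer, count personalized vs. shared channels."""
    plan = []
    for c, r in zip(LAYER_CHANNELS, ratios):
        k = int(round(r * c))     # channels kept local (personalized)
        plan.append((k, c - k))   # remainder participates in aggregation
    return plan

client_emb = rng.normal(size=EMBED_DIM)
ratios = personalization_ratios(client_emb)
plan = split_channels(ratios)
```

Because the split is per layer and per client, layers more sensitive to a client's local distribution can receive a larger personalized share, while less sensitive layers remain mostly shared for cross-client collaboration.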
External IDs: dblp:conf/icws/ChengDWXLLX25