CO-PFL: Contribution-Oriented Personalized Federated Learning for Heterogeneous Networks

16 Sept 2025 (modified: 12 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Personalized federated learning, data heterogeneity
Abstract: Personalized federated learning (PFL) aims to collaboratively train personalized models for multiple clients with heterogeneous and scarce local samples. However, the substantial heterogeneity in sample distributions across clients undermines the effectiveness of vanilla federated learning, where a single consensus model is trained and shared among clients. More specifically, vanilla federated learning aggregates local models via heuristic or data-volume-based weighted averaging without considering the actual contribution of each client's update, which often induces suboptimal personalization performance on heterogeneous client data. To improve personalization performance, we propose a contribution-oriented PFL (CO-PFL) algorithm that jointly assesses gradient direction discrepancies and prediction deviations across client updates. In the proposed CO-PFL algorithm, we leverage information from both the gradient and data subspaces to estimate the contribution of each client (i.e., its aggregation weight) for global aggregation. To further enhance personalization adaptability and optimization stability, CO-PFL cohesively integrates a parameter-wise personalization mechanism with mask-aware momentum optimization. The proposed CO-PFL algorithm mitigates aggregation bias, enhances global coordination and local personalization performance, and facilitates the construction of tailored submodels alongside stable model updates. Extensive experiments on four practical datasets (CIFAR10, CIFAR10C, CINIC10, and M-ImageNet) demonstrate that CO-PFL consistently outperforms state-of-the-art benchmarks.
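The abstract's core idea, replacing data-volume-based averaging with contribution-based aggregation weights, can be sketched as follows. This is an illustrative approximation only, not the paper's actual algorithm: CO-PFL combines gradient direction discrepancies and prediction deviations, whereas this sketch covers only the gradient-subspace part, using softmax-normalized cosine similarity of each client's update to the mean update. All function names here are hypothetical.

```python
# Hypothetical sketch of contribution-weighted aggregation; approximates the
# gradient-subspace contribution with cosine similarity to the mean update.
import numpy as np

def contribution_weights(updates: np.ndarray) -> np.ndarray:
    """Softmax-normalized cosine similarity of each client's update to the mean."""
    mean = updates.mean(axis=0)
    sims = np.array([
        float(u @ mean) / (np.linalg.norm(u) * np.linalg.norm(mean) + 1e-12)
        for u in updates
    ])
    exp = np.exp(sims - sims.max())  # numerically stable softmax
    return exp / exp.sum()

def aggregate(updates: np.ndarray) -> np.ndarray:
    """Contribution-weighted global update, in place of data-volume averaging."""
    w = contribution_weights(updates)
    return (w[:, None] * updates).sum(axis=0)

# Three clients with 4-dim updates; the third client's update points away
# from the consensus direction, so it receives a smaller aggregation weight.
updates = np.array([[ 1.0, 1.0, 0.0, 0.0],
                    [ 0.9, 1.1, 0.0, 0.0],
                    [-1.0, 0.0, 1.0, 0.0]])
w = contribution_weights(updates)
```

In vanilla FedAvg the weights would instead be proportional to local sample counts, so a divergent client with many samples can dominate the aggregate; weighting by estimated contribution is what mitigates the aggregation bias described in the abstract.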
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 7670