FedDFQ: Personalized Federated Learning Based on Data Feature Quantification

27 Sept 2024 (modified: 14 Nov 2024) · ICLR 2025 Conference · Withdrawn Submission · CC BY 4.0
Keywords: Federated Learning, Model Aggregation, Data Feature Utilization
Abstract: Personalized federated learning is widely used to handle heterogeneous data distributions across clients. However, existing methods struggle to measure and exploit this heterogeneity accurately. To address this issue, we propose a novel and efficient method named FedDFQ, which uses a customized Data Identity Extraction Module (DIEM) to dynamically generate metric proxies that quantify data heterogeneity across local clients in a privacy-friendly manner. The metric proxies are then used to assess each local client's contribution during global parameter aggregation and personalized gradient backpropagation. In addition, we design a plug-and-play Automatic Gradient Accumulation Module (AGAM) that regularizes the personalized classification layers with re-balanced gradients. We provide theoretical explanations and experimental results that validate the effectiveness of the proposed FedDFQ. In comprehensive comparisons with existing state-of-the-art approaches, FedDFQ outperforms them on two benchmark datasets under different heterogeneous scenarios.
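The abstract describes proxy-weighted aggregation but gives no implementation details. Below is a minimal sketch, not the authors' method, of what weighting each client's contribution to global parameter aggregation by a scalar heterogeneity score could look like; the `proxies` values and the softmax normalization are assumptions standing in for DIEM's actual metric proxies.

```python
# Hypothetical sketch of proxy-weighted global aggregation.
# `proxies` stands in for DIEM's metric proxies (details not in the abstract).
from typing import Dict, List

import torch


def aggregate(client_states: List[Dict[str, torch.Tensor]],
              proxies: List[float]) -> Dict[str, torch.Tensor]:
    """Average client parameters, weighting each client by its proxy score."""
    weights = torch.softmax(torch.tensor(proxies, dtype=torch.float32), dim=0)
    global_state = {}
    for name in client_states[0]:
        stacked = torch.stack([s[name].float() for s in client_states])
        # Broadcast each client's normalized weight over its parameter tensor.
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        global_state[name] = (w * stacked).sum(dim=0)
    return global_state
```

In this sketch, a higher proxy score simply increases a client's influence on the global model; the paper's actual mapping from proxies to aggregation weights, and their use in personalized backpropagation, is not specified in the abstract.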
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9728